Dataset schema (column: type, with the viewer's observed ranges):

- forum_id: string, length 9–20
- forum_title: string, length 3–179
- forum_authors: sequence, length 0–82
- forum_abstract: string, length 1–3.52k
- forum_keywords: sequence, length 1–29
- forum_decision: string, 22 classes
- forum_pdf_url: string, length 39–50
- forum_url: string, length 41–52
- venue: string, 46 classes
- year: date, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
- reviews: sequence
Record:

- forum_id: F0Zd3knG9j
- forum_title: How transformers learn structured data: insights from hierarchical filtering
- forum_authors: Jerome Garnier-Brun, Marc Mezard, Emanuele Moscato, Luca Saglietti
- forum_abstract: Understanding the learning process and the embedded computation in transformers is becoming a central goal for the development of interpretable AI. In the present study, we introduce a hierarchical filtering procedure for generative models of sequences on trees, allowing us to hand-tune the range of positional correlations in the data. Leveraging this controlled setting, we provide evidence that vanilla encoder-only transformers can approximate the exact inference algorithm when trained on root classification and masked language modeling tasks, and study *how* this computation is discovered and implemented. We find that correlations at larger distances, corresponding to increasing layers of the hierarchy, are sequentially included by the network during training. Moreover, by comparing attention maps from models trained with varying degrees of filtering and by probing the different encoder levels, we find clear evidence of a reconstruction of correlations on successive length scales corresponding to the various levels of the hierarchy, which we relate to a plausible implementation of the exact inference algorithm within the same architecture.
- forum_keywords: Transformers, Belief Propagation, mechanistic explanation, structured data, hierarchical data model, attention, masked language modeling
- forum_decision: Reject
- forum_pdf_url: https://openreview.net/pdf?id=F0Zd3knG9j
- forum_url: https://openreview.net/forum?id=F0Zd3knG9j
- venue: ICLR.cc/2025/Conference
- year: 2025
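The abstract above describes generative models of sequences on trees with a hand-tuned range of positional correlations; the authors' PCFG formulation in the reviews field specifies symbol-to-children transitions via a tensor $M_{ss_1s_2}$ on a full binary tree with $2^\ell$ leaves. A minimal, hypothetical sketch of such a sampler (the random transition tensor and all names are illustrative, not the paper's code):

```python
import numpy as np

def random_transition_tensor(q, rng):
    # M[s, s1, s2] = P(parent s -> children (s1, s2)); normalized over (s1, s2)
    M = rng.random((q, q, q))
    return M / M.sum(axis=(1, 2), keepdims=True)

def sample_leaves(root, M, depth, rng):
    """Sample the 2**depth leaf symbols of a full binary tree rooted at `root`."""
    level = [root]
    for _ in range(depth):
        nxt = []
        for parent in level:
            probs = M[int(parent)].ravel()          # joint distribution over (s1, s2)
            idx = rng.choice(len(probs), p=probs)
            s1, s2 = divmod(idx, M.shape[2])        # unflatten the child pair
            nxt.extend([s1, s2])
        level = nxt
    return level

q, depth = 4, 3
rng = np.random.default_rng(0)
M = random_transition_tensor(q, rng)
leaves = sample_leaves(rng.integers(q), M, depth, rng)
```

Filtering at level $k$, in the paper's sense, would amount to replacing the transitions above level $k$ with factorized ones; the sketch only shows the unfiltered generative process.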
{ "note_id": [ "z8VCuq9GWV", "yaCOT754EU", "yYbPzJwy9z", "xSOUkhLbWe", "wW8h45TbBO", "vw1HyZ1JGl", "vooZfkspR6", "uZieRWTVXY", "s2yMmflBiC", "qenhE0ZDy5", "p1Jmayf4p1", "nM9anObH4G", "ixBBrzWLLu", "hq3KtxbvEg", "gedp4h1BhV", "ePQ20ywz3G", "e2OARpsGsD", "Ujs39TYgTR", "OL3PlEaTRf", "N6Z3ywgGGC", "LHXtzhTrsX", "HWKLIpjOcW", "E38xI7aSiY", "DozGPZRIIY", "8mcp657FsX", "8V0O7gdJGy", "6xgHVHT8B5", "2kSTpEBaGP", "1i958StqxJ", "11OBeQqoeI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732541298640, 1732559164618, 1732034709506, 1732034154207, 1732548787260, 1733242202022, 1732554220636, 1732380190470, 1732033602464, 1732033361525, 1732033870713, 1733241883680, 1732565281596, 1732286240289, 1732034431238, 1730658076213, 1732627770045, 1730251879929, 1732710668416, 1732034303102, 1733204233122, 1732034499388, 1732710856961, 1729665768919, 1733340413169, 1737523652416, 1732034543350, 1732034662406, 1732034067173, 1732573345755 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_VwvV" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_VwvV" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Area_Chair_Xhyf" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_c8Kw" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_BYbw" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_c8Kw" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_VwvV" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_c8Kw" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Reviewer_BYbw" ], [ "ICLR.cc/2025/Conference/Submission4635/Area_Chair_Xhyf" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ], [ "ICLR.cc/2025/Conference/Submission4635/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your responses. I appreciate that you have taken the time to clarify the goals of paper and revise the draft.\\n\\nSo, as I understand it, the thesis of the paper now is that transformers learn to model hierarchies in bottom-up order throughout training, and the data model was designed in service of testing that hypothesis. Is that a fair characterization? Are there other contributions that should be highlighted?\\n\\nIf I may push back on your point about expressing the data model as a PCFG -- I think it would be extremely helpful to many readers (including me) to give an equivalent PCFG. 
I'm not asking how you would translate an arbitrary PCFG to your data model, just how you would express your data model as a PCFG for a given $k$. If it isn't possible, then the remarks about equivalence to PCFGs should be removed.\", \"quick_question\": \"What is the time complexity of the BP algorithm?\"}", "{\"comment\": \"Thanks for the clarifications! A lot of my concerns about clarity and the motivations of the paper have been answered, so I'm willing to raise my score from a 3 to a 5. The difference in contribution between Allen-Zhu & Li (2023) and this paper has also been made clearer, although I would push back a bit on claim (ii) -- they argue at some length that the transformer implements a dynamic programming algorithm, perhaps akin to CKY or the inside algorithm, albeit their argumentation on this point isn't very clear. In my opinion, the scope of the contributions of the paper are below the acceptance threshold for ICLR, which is why I have not raised it to a 6.\\n\\nI do have some advice for strengthening the contributions of the paper.\\n\\n> We provide a novel feasible implementation of the exact algorithm within the same architecture and show that probing experiments on the trained transformer qualitatively align with some of its properties.\\n\\n> Indeed, there is a large computational discount from knowing the topology of the parsing tree, compared to the inside-outside algorithm\\n\\nThese observations point to some interesting questions. Allen-Zhu & Li (2023) argued that the transformer learns to implement a DP algorithm related to CKY or inside-outside. This claim is basically wrong, because those algorithms require $O(n^3)$ time, but the transformer runs in only $O(n^2)$ time. On the other hand, your paper gives an instance of a hierarchical pattern that can be processed in $O(n)$ time. We also know from Khalighinejad et al. (2023) that transformers can approximate CKY to some extent in $O(n^2)$ time. 
Your BP interpretation could help to explain this gap. What is the class of PCFG that can be learned? You mentioned that knowing the topology of the tree is helpful, but the structural information is not explicitly given to the transformer -- it still needs to learn it. I think what your work shows is that once the transformer learns the topology, then it can compute marginals efficiently. Explaining the findings of Allen-Zhu & Li (2023) and Khalighinejad et al. (2023) in terms of the learnability of tree topologies and BP would be an interesting line of inquiry.\"}", "{\"title\": \"Author response (part 5)\", \"comment\": \"**(Q9) Why do you report validation accuracy but not test accuracy?** We apologize for the abuse of terminology on our part. We do not have distinct test and validation sets in our experiments. The models are trained on a training set for a fixed amount of epochs, and we simply measure the test accuracy for the final configuration of model weights (not the best). We corrected this mistake by replacing validation accuracy with test accuracy throughout the paper. **Why does the accuracy go up and then down?** This behavior is very important for the interpretation of the learning dynamics. We will clarify this point in the revised version, given that the current presentation was ineffective. In a nutshell, the accuracy goes up and down on test sets generated from different (higher) filtering data than the training set, and therefore OOD from the perspective of the training model. The explanation comes from the fact that the transformer discovers the existence of higher hierarchical correlation levels (i.e., longer-range correlations) sequentially, during training. It starts by imputing a simplistic explanation for the data (e.g., a correlation length of 2), thus initially increasing its accuracy on the corresponding OOD data. 
But then an additional correlation level (e.g., correlation length of 4) is imputed, and the transformer stirs its predictions accordingly. At this point, the accuracy on the simpler, more factorized data decreases, since the model is learning to assume a richer correlation structure between the symbols, which is not present in the simpler OOD data. This staircase behavior is novel in our work relative to all the PCGF publications mentioned above. We believe that it may be a more general feature of deep network learning dynamics (Refinetti et al., 2023; Bardone & Goldt, 2024; Rende et al., 2024). **Did you not use the best checkpoint when evaluating on OOD data?** As stated above, we used the model weights obtained at the end of training. Our objective is to understand the inner workings of the learning, not to achieve the best possible performance on the OOD data.\\n\\n**(Q10) Why use accuracy instead of perplexity for MLM? Since the CFG can be ambiguous, there isn't only one correct answer, right?** This is a fair remark, which was also indirectly raised by reviewer c8Kw. We have taken it into account in our revised version, including new experiments that we believe strengthen our evidence. We measured the Kullback-Leibler divergence between the BP marginals and the softmax (instead of argmax for a prediction) of the transformer output. This is not exactly the perplexity, however, we believe that it is more relevant for our point of understanding how the networks perform inference while taking into account the necessarily ambiguous nature of the prediction. We find that the KL divergence decreases in training following the same staircase scenario on the filtered models as described in Q9, and show that the full probabilistic prediction matches the exact one given by BP, and not only the accuracy.\\n\\n**References:**\\n\\nAllen-Zhu, Z., & Li, Y. (2023). Physics of language models: Part 1, context-free grammar. 
arXiv preprint arXiv:2305.13673.\\n\\nBardone, L., & Goldt, S. (2024). Sliding down the stairs: how correlated latent variables accelerate learning with neural networks. arXiv preprint arXiv:2404.08602.\\n\\nBehrens, F., Biggio, L., & Zdeborov\\u00e1, L. (2024). Understanding counting in small transformers: The interplay between attention and feed-forward layers. In ICML 2024 Workshop on Mechanistic Interpretability.\\n\\nKhalighinejad, G., Liu, O., & Wiseman, S. (2023). Approximating CKY with Transformers. arXiv preprint arXiv:2305.02386.\\n\\nKrzakala, F., & Zdeborov\\u00e1, L. (2009). Hiding quiet solutions in random constraint satisfaction problems. Physical review letters, 102(23), 238701.\\n\\nMossel, E., Neeman, J., & Sly, A. (2014). Belief propagation, robust reconstruction and optimal recovery of block models. In Conference on Learning Theory (pp. 356-370). PMLR.\\n\\nRefinetti, M., Ingrosso, A., & Goldt, S. (2023). Neural networks trained with SGD learn distributions of increasing complexity. In International Conference on Machine Learning (pp. 28843-28863). PMLR.\\n\\nRende, R., Gerace, F., Laio, A., & Goldt, S. (2024). A distributional simplicity bias in the learning dynamics of transformers. arXiv preprint arXiv:2410.19637.\\n\\nSato, T. (2007). Inside-Outside Probability Computation for Belief Propagation. In IJCAI (pp. 2605-2610).\\n\\nZhao, H., Panigrahi, A., Ge, R., & Arora, S. (2023). Do transformers parse while predicting the masked word?. arXiv preprint arXiv:2303.08117.\\n\\nZhong, Z., Liu, Z., Tegmark, M., & Andreas, J. (2024). The clock and the pizza: Two stories in mechanistic explanation of neural networks. 
Advances in Neural Information Processing Systems, 36.\"}", "{\"title\": \"Author response (part 2)\", \"comment\": \"To address the referee\\u2019s questions:\\n\\n**(Q1) Have authors considered measuring the match between BP and transformer predictions on individual inputs?** Again, we thank the reviewer for encouraging us to push the comparison further. Please see answer to W2.\\n\\n**(Q2) Similar behavior [\\u2026] doesn\\u2019t imply similar implementation. Why couldn\\u2019t [...] transformers [\\u2026] behave like BP on individual inputs [\\u2026] on out-of-sample data as well [\\u2026] without necessarily implementing BP?** We thank the reviewer for this key criticism, which was similarly raised by reviewer VwvV. Given the relevance of this point in the present work, we decided to substantially rewrite the related paragraphs, clarifying what we meant by \\u201clearning an implementation of BP\\u201d (now rephrased as \\u201clearning to approximate the exact inference computation\\u201d) and how we reached this conclusion. Our claims are based on the following evidence:\\n- We obtain approximately identical outputs (in root prediction and MLM) from equal inputs, both in- and out-of-sample. Note that the model is \\u201ctrained to match BP\\u201d only indirectly (we train on the correct values of the masked symbols, not on reproducing the BP marginals, which could differ) and on the training distribution. If the match was accidental, and purely driven by data-fitting, the predictions would differ in the OOD case. For our purposes, obtaining the same output on any sequence implies an equivalence in the computation.\\n- We observe the same hierarchical computational structure underlying BP arises in the transformer (see attention maps), with a sequential focus on longer correlation lengths, corresponding to the various levels of the tree. Indeed, the transformer just learns to \\u201cmodel the data well\\u201d. 
But in this case, it does this so well that the correct sequence of operations (underlying the exact inference procedure) is discovered. The computation doesn't need to be done precisely in the same way as in a standard BP implementation (in fact, it is embedded in high-dimensional space, and different combinations of non-linear and linear operations are employed). Still, there is a fundamental equivalence in how single token information is collected and integrated to eventually obtain the exact predictions for the masked tokens. \\n- Performing probing experiments on \\u2018ablated\\u2019 transformers, we confirm what was hinted by attention maps, that is that removing the k last attention blocks in the architecture leads a transformer that was trained on the full data model to achieve a similar performance at predicting the l-kth ancestor as one that was trained on a data model with a level of filtration k. This strongly reinforces our claim that the transformer goes up the generative tree as the tokens are passed through its architecture, as it would do if following our tentative transformer-based implementation of BP in l layers.\"}", "{\"title\": \"Re:\", \"comment\": \"We thank the referee for taking the time to go through our response and engage in this dialogue.\\n\\n**On our contributions**\\n\\nOur original goal, which we hope is more clearly stated in the revised version, is to understand the learning process and the computation of a transformer trained on our data in a mechanistic way, in the light of the exact, known inference algorithm. To achieve that, on top of the point highlighted by the reviewer,\\n* We show that the transformer reaches calibration, i.e. approximates the exact output of the inference oracle, even on OOD data. 
\\n* We analyze how the exact computation is embedded in the transformer weights (learning \\u201cin space\\u201d).\\n* We provide a novel feasible implementation of the exact algorithm within the same architecture and show that probing experiments on the trained transformer qualitatively align with some of its properties.\\n\\n**Equivalent PCFG formulation**\\n\\nA probabilistic context-free grammar G can be defined by a quintuple $G=\\\\left(M,T,R,S,P\\\\right)$, where $M$ is the set of non-terminal symbols, $T$ is the set of terminal symbols, $R$ is the set of production rules, $S$ is the start symbol, $P$ is the set of probabilities on production rules.\\n\\nIn our model, we consider the special case where terminals and non-terminals coincide $M=T\\\\overset{def}{=}\\\\mathcal{S}$, and there is a set of possible root symbols $\\\\mathcal{R}$. We pick $|\\\\mathcal{R}|=|\\\\mathcal{S}|=q$. The production rules $R$ and their probabilities $P$ are defined as:\\n* For $k>0$: $\\\\forall r\\\\in\\\\mathcal{R}$, all the trasitions of the type: $r\\\\rightarrow s_{1}...s_{2^{k}}$ are allowed, with probability\\n$P\\\\left(r\\\\rightarrow s_{1}...s_{2^{k}}\\\\right)=\\\\prod_{k}P^{(k)}\\\\left(s_{k}|r\\\\right)$, computed as in Eq. (1) in the revised paper. 
Moreover, $\\\\forall s\\\\in\\\\mathcal{S}$, we allow $q$ transitions of the type: $s\\\\rightarrow s_{1}s_{2}$ with probability $P\\\\left(s\\\\rightarrow s_{1}s_{2}\\\\right)=M_{ss_{1}s_{2}}$.\\n* For $k=0$: the production rules for the root correspond to those among the symbols: $r\\\\rightarrow s_{1}s_{2}$ with probability $P\\\\left(r\\\\rightarrow s_{1}s_{2}\\\\right)=M_{rs_{1}s_{2}}$.\\n* For $k=\\\\ell$: we only have root-to-leaves production rules of the type: $r\\\\rightarrow s_{1}...s_{2^{\\\\ell}}$, with probability $P\\\\left(r\\\\rightarrow s_{1}...s_{2^{\\\\ell}}\\\\right)=\\\\prod_{k}P^{(k)}\\\\left(s_{k}|r\\\\right)$.\\n\\nMoreover note that, since we impose a non-ambiguity constraint over the transitions, we have that $\\\\forall$ children pair $s_1,s_2$, $M_{p(s_1,s_2),s_1,s_2}\\\\neq 0$ for a single parent symbol $p(s_1,s_2)$, i.e. any pair of children symbols can come from only one parent symbol. \\n\\nFinally, we consider a fixed parsing tree: a full tree with $2^\\\\ell$ leaves. \\n\\n**Question on BP**\\n\\nTo answer the referee\\u2019s question, the time complexity of the BP algorithm on a tree is linear in the sequence length $n=2^\\\\ell$, or exponential in the tree depth, $\\\\mathcal{O}(n)$. Indeed, there is a large computational discount from knowing the topology of the parsing tree, compared to the inside-outside algorithm (which instead scales as $\\\\mathcal{O}(n^3)$).\"}", "{\"title\": \"Re 3 (2/2):\", \"comment\": \"```Thus it is actually surprising that the probe doesn\\u2019t do ~100% in figure 7 on predicting the level 3 ancestor using the first layer representation (Fig3, triangle@3). 
Could it be doing something different from pooling information from immediate neighbors (which is what BP would do)?```\\n\\nWe agree that this observation is somewhat surprising, at first, since it shows that the embedded computation does not follow exactly the \\u201cBP order\\u201d, however, we argue it could have been predicted from the attention maps. If one looks closely at the first layer map in the top row (i.e. the first attention layer), the attention pattern is not exactly the one we imposed in our transformer BP construction, and does not follow exactly the 2-block structure (although it does focus on this range of correlations). In order to obtain a good approximation of BP, some parts of the \\u201cmissing\\u201d recombinations must have been postponed to the following transformer layers. Some brute-force memorization of mappings between blocks of symbols and their common ancestors might have taken place, but without a supervised signal. We find it somewhat remarkable that such a discrepancy only clearly occurs in the first layer.\\n\\nThis observation is one of the reasons we do not argue that the computational steps of the transformer are in 1-to-1 correspondence with BP. Yet, traces of the BP computations can be found in the trained transformer. In the paper, we only claim a compatibility/affinity between the sequential structure.\\n\\n\\n```Current evidence still insufficient to conclude 1)```\\n\\nBoth interpretations proposed by the reviewer entail an affinity of computation between BP and the transformer, which is what we claim. Even if the transformer memorizes a table to impute the ancestors from blocks of larger sizes, it is still a sign that it is memorizing a piece of the BP computation, and not some specific input-output mapping. 
\\n\\n```Current evidence still insufficient to conclude 2)``` \\n\\nWe somewhat agree that our understanding of what the transformer implements cannot be straightforwardly generalized to other, more complicated graphical models. However, we would like to remind them that i) this is not something that we claim, in fact we explicitly state in our revised conclusion that carrying out similar experiments on different graphical models would be very instructive; ii) our work has to be taken in the context of current state-of-the-art mechanistic interpretation studies, which in most cases focus on very narrow e.g. arithmetic problems.\\n\\nOn the reviewer\\u2019s new remark about the non-ambiguous nature of the model: note that the structure of our transition matrices indeed rules out ambiguity going up the tree, but not for the MLM task which requires the descending messages, as demonstrated by the fact that the exact marginals from BP are not delta-like. As can be seen from the BP perspective, there is no significant increase in difficulty for the MLM task in the case of child-to-parent ambiguity, since the same computation has to be carried out. Moreover, our exact transformer implementation of BP applies to any transition tensor. Finally, we would have been happy to show some results in an ambiguous case if this point had been raised before, but now we can no longer revise our paper.\\n\\n\\n**Final remarks**\\n\\n\\nWe encourage the reviewer to revisit the weaknesses they first identified in our paper, and how we addressed them throughout this rebuttal period. 
Namely:\\n\\n```1) It\\u2019s not clear how much the observations made in this work generalizes to larger models, and more complicated data distributions.```\\nWe agree with this point, but as stated this is a limitation that any mechanistic interpretation study will necessarily suffer from.\\n\\n```2) The empirical evidence presented has alternative interpretations that haven't been ruled out.```\\nHere, we believe that we have added stronger evidence, better explanations and more complete discussions. This improvement has been recognized by the reviewer in our exchanges.\\n\\n```3) The construction for implementing BP on depth- tree using a transformer of only -layers could benefit from a bit more details, especially on the root-to-leaf message passing part.```\\nThis was addressed in our first updated draft, to which the reviewer responded very positively.\\n\\nOverall, we tried to take into account all the received feedback. We think our paper strongly benefitted from this, and we sincerely thank the reviewer for their engagement. However, at this point, we think it would also be fair for the reviewer\\u2019s score to reflect our efforts and the paper improvements since the original version was judged to be only marginally below threshold.\"}", "{\"title\": \"Please discuss further\", \"comment\": \"Have the authors adequately addressed your concerns about generalizability and rigor? Are the presentation changes enough to adjust your score? Rebuttal period is ending soon.\"}", "{\"title\": \"Revised version\", \"comment\": [\"Following the comments and suggestions of the reviewers, we have extensively revised our presentation and included additional experiments and discussions in our paper. 
We believe the clarity of the paper has substantially improved, and for this, we thank the reviewers for their constructive feedback.\", \"Here's a list of the main changes you will find in the uploaded revised version:\", \"The abstract was modified to better reflect the goals of our work.\", \"An **Our contributions** paragraph was added to the introduction, listing the main results and clarifying the novelty and the message of the paper.\", \"The main figure, Fig. 1, was reorganized to include stronger evidence of the computational equivalence between the trained transformer and the exact algorithm, and the sequential learning process in the implementation of the hierarchical correlations in the data.\", \"The section on the data model, **A model with filtered hierarchical correlations** was shortened and simplified, moving the technical descriptions that are not necessary for understanding the main results to the Appendix. Importantly, we now more clearly state the purpose of considering a simpler setting than standard CFGs.\", \"The main findings, substantially rewritten to improve clarity, are now described in two separate sections:\", \"**How transformers learn to climb the hierarchy in time**: Here, we show evidence that in the root classification and in the MLM tasks, transformers not only approach optimal accuracy but approximate the output of the exact oracle. Moreover, we show that during the learning process, the model integrates higher hierarchical levels sequentially, leading to a staircase behavior.\", \"**How transformers embed the exact inference computation**: Here, we show that when the number of transformer layers is matched to the levels in the generative tree, the computation can become interpretable. 
We present our feasible implementation of BP, show the analysis of the attention maps, the probing experiments at the different encoder levels, and the MLM pre-training effect.\", \"In the **Conclusions**, we now explain how our study could represent an important step in interpreting transformer computation in related settings.\", \"We included the suggested references and the citations mentioned in the answers to the reviewers.\", \"We completed and clarified the explanation of the feasible BP implementation within the transformer architecture.\", \"We hope our attempt at improving our work meets the concerns of the reviewers, and that the scores can be raised accordingly.\", \"Please feel free to let us know if there are any remaining comments not addressed. We appreciate any feedback, and we are happy to answer any further questions and adjust the manuscript.\"]}", "{\"title\": \"Author response (part 2)\", \"comment\": \"To address the referee\\u2019s questions:\\n\\n**(Q1) Why the accuracy would have a drop in the middle of training.** This behavior is very important as part of our understanding of the learning dynamics, so we regret that we have not successfully communicated this point (as pointed out also by reviewer VwvV). We are working on clarifying it. In a nutshell, the accuracy goes up and down on test sets generated from different (higher) filtering data than the training set, and therefore OOD from the perspective of the training model. The explanation comes from the fact that the transformer discovers the existence of higher hierarchical correlation levels (i.e., longer-range correlations) sequentially, during training. It starts by imputing a simplistic explanation for the data (e.g., a correlation length of 2), thus initially increasing its accuracy on the corresponding OOD data. But then an additional correlation level (e.g., correlation length of 4) is imputed, and the transformer stirs its predictions accordingly. 
At this point, the accuracy on the simpler, more factorized data decreases, since the model is learning to assume a richer correlation structure between the symbols, which is not present in the simpler OOD data. This staircase behavior is novel in our work and may be a more general feature of deep network learning dynamics (Refinetti et al., 2023; Bardone & Goldt, 2024; Rende et al., 2024).\\n\\n**(Q2) The writing for section 3.3 can be improved by giving more easy-to-understand intuition.** We are making an effort to improve our presentation, putting stronger accents on the main results and on their discussion, and simplifying the technical details sections. \\n\\n**References:**\\n\\nBardone, L., & Goldt, S. (2024). Sliding down the stairs: how correlated latent variables accelerate learning with neural networks. arXiv preprint arXiv:2404.08602.\\n\\nRefinetti, M., Ingrosso, A., & Goldt, S. (2023). Neural networks trained with SGD learn distributions of increasing complexity. In International Conference on Machine Learning (pp. 28843-28863). PMLR.\\n\\nRende, R., Gerace, F., Laio, A., & Goldt, S. (2024). A distributional simplicity bias in the learning dynamics of transformers. arXiv preprint arXiv:2410.19637.\\n\\nZhao, H., Panigrahi, A., Ge, R., & Arora, S. (2023). Do transformers parse while predicting the masked word?. arXiv preprint arXiv:2303.08117.\"}", "{\"title\": \"Author response (part 1)\", \"comment\": \"We thank the reviewer for carefully reading our work and providing valuable feedback, which we will use to improve our presentation. We would first like to address the overall weaknesses identified by the reviewer:\\n\\n**(W1) I'm not sure whether the transformer accuracy matches that of belief propagation is surprising \\u2026 transformer is simply learning the \\\"optimal\\\" prediction function**. We thank the referee for his comment, and we agree, in a sense. 
As stated in our introduction, transformers assimilate natural language that is vastly more complex than our data model. What we think is key in our study is understanding how it does so, both in time during learning, capturing the longer range correlations through a clear staircase identified thanks to the filtering, as well as in \\u2018space\\u2019 within the architecture, organizing the computation throughout its layers in an interpretable way, strikingly similar to the most obvious implementation of BP in l layers. In the context of this data model, BP is the exact (and thus optimal) oracle, so we will rephrase \\u201clearning to implement BP\\u201d as \\u201clearning to perform optimal inference\\u201d.\\n\\n**(W2) I think the writing of this paper can be improved.** We are currently working on streamlining the first half of our paper, in particular, to be more straight to the point. We will clearly highlight our contributions, which have been received with some confusion. Moreover, we added some additional experiments, that strengthen our claims in regards to the model approaching the implementation of the exact inference algorithm. \\n \\n**(W3) While the construction sec3.5 is interesting, it does not mean that the learned transformer is actually doing that.** This is entirely true, and we have added a line highlighting that this is an existence proof for the implementability of BP in an l-layer transformer, but we are not claiming that it is exactly being implemented. In fact, the described implementation is meant to be understandable and requires, for example, full disentanglement between positional and semantic information, which is not forced in our experiments. Nonetheless, we believe the existence of this implementation is non-trivial, and that it strengthens our paper significantly relative to other works that have attempted to study the parsing of CFGs in transformers, since it can be used in practice as a tool for interpretability. 
The state-of-the-art work of Zhao et al. (2023) notably provides only a possible implementation that requires more attention layers than BERT has. We leave a precise study of the exact relation between the theoretical and the learned implementation for future work, but it is easy to notice that the organization of the attention layers qualitatively agrees between the two, suggesting a strong link.\"}", "{\"title\": \"Response to All Reviewers\", \"comment\": \"We thank all the reviewers for their thoughtful and helpful feedback! We are pleased that:\\n\\n**Reviewer c8Kw** finds that our work is \\u201cwell-placed\\u201d in the context of mechanistic interpretability of transformers, as well as in the study of the \\u201ceffect of structured data on machine learning models\\u201d.\\n\\n**Reviewer VwvV** finds our analysis of the expected attention patterns, and the observation of sample efficiency gain via MLM pretraining \\u201cquite interesting\\u201d.\\n\\n**Reviewer BYbw** finds that our work offers a \\u201cnovel viewpoint to study transformer learning from belief propagation\\u201d and that the \\u201cCFG construction with filtering is interesting\\u201d.\\n\\nMore importantly, we thank the reviewers for carefully identifying some significant weaknesses and avenues for improvement, which pushed us to perform new experiments and undergo a significant rewriting of the paper to deliver our points in a clearer fashion. We are currently working on the revised manuscript, and will upload it as soon as possible. 
Please do not hesitate to let us know if you have additional comments or questions, which will allow us to achieve the best possible version of our paper.\"}", "{\"title\": \"Re 3 (1/2):\", \"comment\": \"```Alternatives to the BP algorithm exist\\u2026take MLM\\u2026authors state that a wide enough two-layer net could memorize input-output mapping but wouldn\\u2019t match the BP marginals. Why must this be the case? Training objective is MLE (log-loss), eventually match the ground truth conditional distribution, assuming the two-layer net is expressive enough? Suppose by argument of scarcity of training data this alternative is ruled out```\", \"we_introduced_the_wide_two_layer_thought_experiment_to_challenge_the_statement_of_the_reviewer\": \"\\u201ca model that solves the tasks in section 3 without BP will also yield the observed patterns\\u201d. We assume the reviewer now agrees with us that this is not the case.\", \"on_their_new_remark\": \"a sufficiently wide two-layer network is a universal approximator because, in the limit, each hidden unit can focus on a specific input and map it with the second layer to the correct output, and not because any function can be exactly rewritten in terms of a (linear operation + non-linearity + linear operation).\\n\\nIn fact, with a single non-linearity, it is not possible for the network to perform all the aggregations that are needed in the proposed hierarchical model, so only a pure memorization approach could lead the model to fitting the training data (e.g. with a piece-wise linear function if the non-linearity is ReLU). When new inputs are presented, the model would at best produce a linear interpolation between the closest inputs in the training set, which does not yield a good predictor in our data model (note that changing a single symbol in the sequence can lead to a completely distinct set of ancestors). 
Therefore, as conceded by the reviewer, in the data-scarce regime we are considering, the good calibration on new examples, in- or out-of-sample, rules out a pure memorization strategy. \\n\\n\\n``` There exist other alternatives still - running BP on factor graphs that have equivalent distributions but different graph compared to the ground truth binary tree structured factor graph\\u2026combining two or more small factors in the ground truth graph into a single larger factor\\u2026or by marginalizing out latent variables\\u2026approximations to these solutions would behave similar to approximations to BP.```\\n\\nThe BP derivation does not introduce additional assumptions on the underlying data distribution, it is just an exact way of enforcing the correct relations between the hidden and visible variables, deriving from the top-down Markovian nature of the generative process. The alternative factor graph representations proposed by the reviewer are completely equivalent to the original BP graph, since the messages reaching the leaves will still need to be computed in the same way (the more complicated factors will still impose the true underlying hierarchical correlation structure). Therefore, we agree that all approximations of these algorithms are equivalent. \\n\\nWe don\\u2019t understand how this point contradicts any of the claims we make in the paper (see also Re 2 (1/2)). Moreover, the observed token mixing patterns (through the attention mechanism) seem to point to a rather transparent interpretation of how correlation lengths are progressively accounted for by the transformer, which is in itself a significant mechanistic interpretation contribution.\\n\\n\\n```However, a question is whether this information is simply the assignment to the corresponding block of variables\\u2026or whether it is some deeper embedding of it. The strength of the evidence varies across the layers. Difficulty perspective: to predict the upper layer variables (e.g. 
the root) using a two-layer readout that takes shallow embeddings of x1:16 is arguably very difficult, more difficult than predicting lower layer variables from their smaller corresponding blocks. For lower level\\u2026shallow concatenated word embeddings\\u2026is enough.```\\n\\nWe agree with the reviewer that approximating the ancestor computation for a single level might be easy with a two-layer readout once the 2-blocks are identified, and that achieving such good approximation for higher ancestors becomes increasingly difficult (likely impossible if the size of the hidden layer is kept fixed to 64). Precisely for this reason, since higher ancestor classification (including the root) is in fact achieved by the same simple read-out from the higher encoding layers, it is clear that some steps in the computation need to have taken place in the previous layers. A simple linear mixing within larger and larger blocks would not suffice, since the whole BP computation would need to be approximated by the read-out, and the reviewer agreed this would be implausible. \\n\\nMoreover, this discussion focuses only on the ancestor prediction, disregarding that this computation is auxiliary for the exact MLM inference but spontaneously appears in the transformer.\"}", "{\"title\": \"Thank you for the updated draft and your response. Some additional clarifications and concerns.\", \"comment\": \"I thank the authors for their responses, the updated draft, and the additional results. I very much appreciate the additional clarifications in the appendix on the construction of BP downward pass.\\n\\nIf I understood correctly, the claims made by this updated version is as follows:\", \"on_learning_process\": \"1. Transformers learn local dependencies before learning longer range ones.\", \"on_embedded_computation\": \"2. It approximates BP at the input-output level (i.e. matches the distribution, end-to-end).\\n3. 
Furthermore, it matches BP at the input-output level because it implements something like BP internally.\\n\\nI appreciate the authors for strengthening their analysis in support of 1 (e.g. Fig 1c and 1d, Fig 4 and 5) and 2 (e.g. Fig 1b). My main concern, however (see question W2.1.2 in the original review), is still with claim 3, which is central to the contribution of this work. \\n\\nC1. I would first like to clarify with the authors whether they are arguing for claim 3 or claim 2 on lines 314-315, 331-332, and 390-392 (my current interpretation of the writing is for claim 3, but I do not believe the evidence is strong enough for 3, since evidence is only at the input-output level). My point is that a model that solves the tasks in section 3 without BP will also yield the observed patterns such as those in figure 1b, 1c, 1d, and 4, 5. I am happy to expand and discuss my reasoning about this further with the authors, but will for now focus on the more significant concern I have in C2.\\n \\nC2. If I take the current position that the lines above only support claim 2, then is it fair to say that the main argument for claim 3 relies on results from the probing experiment in section 4 (Fig 7 left)?\\n\\nThere are not enough details in the current paper to form a judgement about the validity of the argument made with Fig 7 left. Here are some important details: \\n\\nHow much training data does the probe require to perform like Fig 7 left? And how did you make sure that the probe is not overfitting the training data (i.e. that the marginals about ancestors are available in the representation of the transformer, and easily decoded by the probe, rather than learned by the probe from lots of training data)? For example, in the cited paper Zhao et al. 
2023, section 4.3, their probing results involves training a probe on PCFG data and transferring the probe without much loss of accuracy to PTB data.\\n\\nTo summarize, my main concerns are as follows:\\n\\nClaim 3 is central to the paper, yet\\n1. Some arguments seem to be made about Claim 3 using results that I believe is only enough to support claim 2 (C1). (If I misunderstood, and that the authors are only arguing for claim 2 then I don\\u2019t have this concern anymore).\\n2. The most important direct evidence for Claim 3 is section 4 probing, but the details of the probing experiments are not enough to judge whether the probe has overfitted and whether traces of possible BP computation is actually in the transformer activations. (C2)\\n\\nHaoyu Zhao, Abhishek Panigrahi, Rong Ge, and Sanjeev Arora. Do transformers parse while predicting the masked word? arXiv preprint arXiv:2303.08117, 2023.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the response, I'll maintain my current rating for now.\"}", "{\"title\": \"Author response (part 1)\", \"comment\": \"We thank the reviewer for carefully reading our work and providing valuable feedback, which we will use to improve our presentation. We would first like to address the overall weaknesses identified by the reviewer:\\n\\n**(W1) Novelty relative to Allen-Zhu & Li (2023):** We understand the reviewer\\u2019s concern on this point, especially given the thorough experiments performed in this work on probabilistic CFGs. However, we believe that the objectives of our works are markedly different, albeit complementary. In a nutshell, their work demonstrates that GPTs, which are large and pretrained transformer-based models, have the ability to efficiently parse probabilistic context-free grammars. They notably show this by performing \\u2018invasive\\u2019 probing experiments on these large models, for instance predicting some ancestors in the (variable topology) trees characterizing the samples. 
However, they do not address (i) what learning dynamics the transformers may follow to achieve this\\u2013do they progressively discover the existence of ancestry through training? do they start by combining adjacent symbols through the attention and progressively include the relation between increasingly distant tokens in the sequences?\\u2013; (ii) how the trained transformers process the information within the architecture, and the possible relationship between the underlying parsing tree and the neural network; (iii) if the exact inference algorithm, here the inside-outside algorithm for probabilistic CFGs, is even implementable in the neural network architecture they are studying, or if the transformers must necessarily rely on some approximation in their implementation due to some architecture-related infeasibility. While Allen-Zhu & Li (2023) undoubtedly provide evidence that large transformer-based networks may implement the optimal algorithm for CFGs, we thus believe that they do not tackle the problem from a mechanistic interpretability standpoint. On the other hand, we propose a simplified setting that allows us to specifically tackle points (i)-(iii) stated above, and to gain valuable insights into the inner workings of transformers and how they learn from structured data. Finally, note that the proposed filtering procedure is only possible if the tree topology (and the sequence length) is fixed, which is the case for our simplified setting, but not for general CFGs (see also Q3-Q4). Generalizing this tool, which was essential in understanding the sequential discovery of the hierarchical correlations, to the general parsing case is an interesting yet challenging future research direction.\\n\\n**(W2) Simplicity of the task:** Somewhat related to the point above, we believe that this perceived weakness of our work strongly depends on the question one sets out to answer. 
We agree with the referee that it is indeed not a surprise in itself that, given enough samples, transformers manage to fit the training data distribution\\u2013after all, we know that actual language is perfectly mastered by LLMs. However, our work sets out to answer the question of how transformers are able to assimilate complex probability distributions, both in the sense of the training dynamics and of the \\u2018spatial\\u2019 organization of trained transformers and how they perform computations throughout the architecture. In this context, we believe that our setting provides an intermediate complexity between probabilistic CFGs (see the point above related to Allen-Zhu & Li (2023)), and the typical state-of-the-art mechanistic interpretability settings that often rely on simple mathematical tasks such as modular addition (Zhong et al., 2024) or histogram counting (Behrens et al., 2024). As also mentioned in the answer to the previous point, the fixed depth allows us to introduce the filtering procedure, which is a key interpretative element in our analysis. \\n\\n**(W3) Clarification of the goals of the paper:** We thank the reviewer for motivating us to restructure our abstract and introduction, which did not carry the goals and messages of our paper properly. In the revised version, we aim to recenter the introduction towards the title of the paper and our overarching goal of understanding how structure may be digested in transformer architectures and notably in the attention mechanism. The filtered data model is therefore a tool that allows us to achieve this goal in the context of well-controlled inference problems. We believe that applications of our insights towards practical problems, in NLP or other domains, are tangible. 
We added a discussion on the fact that the structure of data and wide-spanning correlations being progressively incorporated by the transformer during training is a feature that has been found in widely different contexts, and that may also have valuable implications for the practical design of training protocols.\"}", "{\"summary\": \"The paper investigates how transformer models make predictions on samples coming from a structured data distribution, focusing on the hypothesis that transformers implement belief propagation to make predictions.\", \"contributions\": \"1. The authors propose a novel family of synthetic data distributions based on PCFGs to test the hypothesis empirically. \\n2. The authors present experimental results on two tasks, root prediction and masked language modeling, which the authors claim to support their hypothesis.\\n3. The authors present a construction for how to implement belief propagation for a tree-structured factor graph of depth $l$, using only $l$ layers.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The work is well-placed in the context of other mechanistic interpretability work for transformers, as well as other work looking into the effect of structured data on machine learning models.\\n2. Understanding what strategy is learned by transformer models trained on structured distributions is an important problem.\\n3. The family of synthetic distributions is interesting and has a hyperparameter that allows one to control the locality of correlations between tokens in the sequence.\\n4. The authors present a novel construction for how to implement BP on a depth $l$ tree using only $l$ layers of a transformer, whereas previous constructions required $2l$ layers.\", \"weaknesses\": \"1. It\\u2019s not clear how much the observations made in this work generalizes to larger models, and more complicated data distributions.\\n2. 
The empirical evidence presented has alternative interpretations that haven't been ruled out. (Please see Questions.)\\n3. The construction for implementing BP on a depth-$l$ tree using a transformer of only $l$ layers could benefit from a bit more detail, especially on the root-to-leaf message passing part. (Please see Questions.)\", \"questions\": \"W2.1: In both the supervised root prediction task and the MLM task, the authors argue that the transformer performing similarly in accuracy to BP is evidence that the transformer is implementing an approximation to BP. While it is very intriguing that the accuracies are so systematically similar, there are plausible alternative explanations (that additional experiments or analysis could rule out):\\n\\nW2.1.1: Similar accuracy doesn\\u2019t imply similar behavior on individual inputs. Have the authors considered measuring the match between BP and transformer predictions on individual inputs? If the match is high, this would strengthen the authors\\u2019 claim that the model actually behaves like BP. \\n\\nW2.1.2: Similar behavior on individual inputs doesn\\u2019t imply similar implementation. For example, in figure 1(c), the authors show accuracy of a model trained on k=0 (and for root classification) data and evaluated on k>=0 data, whose accuracy is similar to running BP on the k=0 graph. The authors claim this is evidence in support of the model having learned to implement BP. Why couldn\\u2019t the model have modeled the k=0 data well without implementing BP? 
BP on k=0 graph is optimal for k=0 data, so the transformer that was trained on lots of k=0 data would necessarily behave like BP on individual inputs, which (without necessarily implementing BP) would make similar predictions to BP on out-of-sample data as well.\\n\\nW2.2 In the supervised root prediction task, the authors write that (line 314-316) \\u201cWe interpret this as a consequence of the weaker correlations between distant tokens\\u2014and therefore the lower signal-to-noise ratio during learning\\u2014that must be resolved to match the BP prediction.\\u201d Since the task is to predict the root, could the authors elaborate more on why they believe the weaker correlations among **tokens within a sequence** is causing difficulty for learning? An alternative interpretation is that this is due to the weaker correlation between the **root** and the **entire sequence** of tokens.\\t\\n\\nW3. Following up on the BP construction, could the authors elaborate more on the leaf-to-root passing? In particular, what do the $r$\\u2019s represent? It looks like all $r$\\u2019s are initialized to uniform distribution, and they each get updated with the same formula (eqn 22), so would $r^{(a,m)}_i$ ever be different from $r^{(a\\u2019,m)}_i$ for $a \\\\neq a\\u2019$? A walkthrough of the formulas on a minimal example with small $l$, and $q$ may be helpful here.\", \"other\": \"Did the authors mean to refer to figure 3 instead of 1c on lines 337? 
In the caption of figure 1c it says it\\u2019s for MLM instead of root classification, and the scale of the x-axis suggests it\\u2019s MLM too.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re:\", \"comment\": \"We thank the reviewer for acknowledging the role our work could play in understanding which parsing-related inference algorithms can be feasibly approximated with a transformer, and for suggesting a direction they believe could be worth exploring. At the moment, as mentioned in our conclusions, we are indeed continuing our research in the direction of reintroducing the variability of the parsing tree topology, and hope to make progress in understanding in a detailed fashion when and how the transformer can still approximate the exact output. We agree with the reviewer that our work provides a strong baseline for these further explorations.\\n\\nOn the other hand, we invite the reviewer to consider also the other contributions we are providing, although they might not align with the research questions they find most interesting. We are contributing to different lines of research:\\n* Finding exact embeddings of inference algorithms in neural network computation: e.g., [1] does so without any experimental evidence, [2] proposes an unfeasible scaling for the network parameters.\\n* Understanding which inference problems related to parsing can be solved approximately optimally: e.g., the reviewer argues that [3] claims cannot be correct, and [4] shows that the approximation degrades with high ambiguity. \\n* Mechanistic interpretation of the transformer computation: e.g. 
[5, 6, 7] attempt to interpret the computation in tasks where the algorithms for obtaining the correct answer are available, through attention map analyses and probing experiments.\\n* Understanding the role of the learning dynamics, and how different components of the data correlation structure are discovered in time: e.g., [8, 9] show similar stair-case phenomenologies, but with different data models.\\n* Understanding the role of hierarchical correlations: e.g. [10, 11] show how they are absorbed and how they shape the learning of deep networks. \\n\\nMany of these works have been accepted at major conferences. In our work, we strive to propose new evidence and make progress in each of these research directions. However, the reviewer believes the content presented in our work is **not** even **weakly acceptable** in ICLR. Given that the content and the claims of the paper have been accepted by the reviewer, we can only hope they can reconsider their judgment. \\n\\n**Bibliography**\\n\\n[1] Song Mei, \\"U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models.\\"\\n\\n[2] Zhao et al. \\"Do Transformers Parse while Predicting the Masked Word?\\"\\n\\n[3] Allen-Zhu et al. \\"Physics of language models: Part 1, context-free grammar.\\"\\n\\n[4] Khalighinejad et al. \\"Approximating CKY with Transformers.\\"\\n\\n[5] Rende et al. \\"Mapping of attention mechanisms to a generalized Potts model.\\"\\n\\n[6] Behrens et al. \\"Understanding counting in small transformers: The interplay between attention and feed-forward layers.\\"\\n\\n[7] Zhong et al. \\"The clock and the pizza: Two stories in mechanistic explanation of neural networks.\\" \\n\\n[8] Bardone et al. \\"Sliding down the stairs: how correlated latent variables accelerate learning with neural networks.\\"\\n\\n[9] Sz\\u00e9kely et al. 
\\\"Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks.\\\" \\n\\n[10] Cagnetta et al. \\\"How deep neural networks learn compositional data: The random hierarchy model.\\\"\\n\\n[11] Cagnetta et al. \\\"Towards a theory of how the structure of language is acquired by deep neural networks.\\\"\"}", "{\"summary\": \"This paper evaluates encoder-only transformers on a synthetic task, where data is sampled from a complete binary tree-structured generative model of fixed depth $\\\\ell$. There is a knob $k$ that determines at which layer that subtrees are forced to be conditionally independent. This generative process is equivalent to a PCFG, although it is framed mostly in terms of factor graphs and belief propagation. The authors find that transformers can predict the root node type given the leaves with high accuracy. They find that they can do MLM prediction with high accuracy, and that using MLM as a pretraining step improves sample efficiency for root prediction. They analyze attention patterns and see that they attend in a hierarchical fashion as expected.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Some of the experimental results are quite interesting. For example, Fig 4 shows the expected attention patterns, and Sec 3.4 is interesting in that it shows that MLM pretraining improves sample efficiency.\", \"weaknesses\": \"Although the paper includes some interesting visualizations, I am not quite convinced that the contributions of this paper are particularly novel or rise to the level of a full ICLR paper. At times I also found the paper difficult to read, and its claims unclear.\\n\\n1. In terms of novelty, there seems to be significant overlap with Allen-Zhu & Li (2023), and the experiments seem to be a simpler case of that paper. 
Like this paper, Allen-Zhu & Li (2023) trained transformers on data generated from CFGs of fixed depth and showed that the transformer layers learned to attend to constituents as expected.\\n1. In terms of the significance of the contribution, independently of the question of whether it is novel, this is a very simple synthetic task that seems a bit contrived so that transformers are successful on it (an issue that is also in Allen-Zhu & Li (2023)), and it is not clear that we learn very much about transformers from these experiments. The strings in the training data are all of the same length, and the depth of the underlying parse tree is also fixed and does not exceed the number of transformer layers. It is not surprising that a transformer encoder with the same number of layers as the underlying parse tree can learn to mimic the structure of the underlying complete binary tree. It would be more interesting to test the transformer on a CFG with parse trees of varying depths. Do we see similar behavior, and does the fact that the number of layers is finite matter then?\\n1. In terms of clarity, it is not clear at the beginning of the paper what its primary goal is. Is the paper primarily about proposing a new data model, and if so, what is the purpose and significance of $k$? Are you primarily interested in analyzing the transformer architecture, and if so, how will the analysis on this synthetic task help us understand the behavior of transformers on real tasks such as natural language?\\n1. One of the main claims of the paper is that the transformer learns to implement a belief propagation algorithm, but I don't see significant evidence that shows that it is learning to implement BP vs. another algorithm, e.g., some version of the inside algorithm. I don't think the analysis of accuracy on OOD examples and attention patterns rules this case out.\\n1. 
Major figures supporting the paper's claims are only in the appendix (Fig 7, App C.3 and C.4).\", \"questions\": \"1. Intuitively, what \\\"knob\\\" does the filtering parameter $k$ represent? Is it the case that lower $k$ result in more long-range correlations in the data?\\n1. 035: Another relevant paper: https://arxiv.org/abs/2305.02386\\n1. Is there a particular reason why you chose to frame the paper mostly in terms of factor graphs and belief propagation, rather than CFGs and standard parsing algorithms (e.g., the inside algorithm)? Is there an advantage to presenting it this way? Is there an advantage in time complexity vs. using a CFG parsing algorithm?\\n1. 119: What would the equivalent PCFG be, incorporating the depth constraint and $k$?\\n1. 133: What is $\\\\mathcal{O}_a$? What is $q$? This part is very unclear to me.\\n1. 144: It's not clear to me what this means. Can you express this in equations?\\n1. Can the root always be uniquely determined by the input symbols? According to my understanding, the underlying CFG can be ambiguous. How is it possible to get 100% accuracy?\\n1. How does the BP algorithm described in the main text relate to the experiments? In what way is it used? I don't think this is stated explicitly.\\n1. Fig 3: Why do you report validation accuracy but not test accuracy? Why does the accuracy go up and then down? Did you not use the best checkpoint when evaluating on OOD data?\\n1. Why use accuracy instead of perplexity for MLM? Since the CFG can be ambiguous, there isn't only one correct answer, right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re 2 (1/2):\", \"comment\": \"It is clear that there is a misunderstanding about the statements on **algorithmic equivalence**, and that we might have a stronger opinion on the explanation of the computational mechanism implemented by the transformer compared to the reviewer. 
We acknowledge that we are not proving that BP is exactly implemented by the transformer. However, we think we are not overstating this point in the paper, as we will clarify below, where we address the more technical points raised by the reviewer in their latest response.\\n\\n**About C1.**\\n\\nTo better specify our claims, in the **Our contributions** paragraph we claim:\\n\\n(reviewer\\u2019s **point 2**) *Transformers approach optimal performance \\u2026 in a calibrated way, by predicting probabilities that approximate those yielded by the BP oracle even on out-of-sample inputs \\u2026 evidence of equivalence in computation to the exact inference algorithm.*\\n\\n(reviewer\\u2019s **point 3**) *We find that the attention maps are compatible with a natural\\nimplementation of BP within the architecture, see Fig. 1(e). We verify this **affinity** through\\nprobing experiments, providing strong clues on how transformers learn from our structured data in \\u201cspace\\u201d.*\\n\\nThe evidence the reviewer points to (the referenced lines) explicitly supports **point 2**, i.e. that the transformer approximates BP at the input-output level (matches the distribution, end-to-end also on random inputs). The **computational equivalence** we claim is at the level of input-output association (please indicate a better phrasing if this sentence appears too strong).\\n\\nThe coincidence between the mappings provides a motivation for our mechanistic interpretation study, where we try to open up the computation of the transformer and see if we can find qualitative ingredients of the BP algorithm. And, indeed, we find that all the performed tests point to a similar computational structure as in BP, with successive combinations on increasing block sizes and with ancestry information becoming \\u201cavailable\\u201d at the correct layer, to be mixed with the information from the leaves. 
Importantly, this affinity emerges naturally, without being explicitly enforced at training.\\n\\nOn the other hand, we don\\u2019t fully understand why the reviewer is confident that \\u201ca model that solves the tasks in section 3 without BP will also yield the observed patterns\\u201d. For example, a large enough 2-layer network, trained to memorize **all** possible input-output mappings, will not produce the same patterns (and would not implement a similar sequential computation). Instead, if the task is to be solved in a sequential way, we do agree that the same mixing patterns must be employed, and this is in fact what we observe and state in our results.\"}", "{\"title\": \"Author response (part 3)\", \"comment\": \"**(Q3) Since the task is to predict the root, could the authors elaborate more on why they believe the weaker correlations among tokens within a sequence is causing difficulty for learning?** We thank the reviewer for raising this question, as it pushed us to look a bit more attentively at this issue and to provide a new (simpler) interpretation that will hopefully convince them as well. To answer their question on why we thought this was related to token-token correlations rather than sequence-root correlations, this is because the k=l case is actually the one which has the smallest mutual information between the sequence and the root, while it is the easiest to learn from Fig.7 in the appendix. Moreover, this mutual information should be strictly increasing as k decreases, while the sample complexity as a function of k appears to present a non-monotonic behavior, being the smallest for k=l and the second smallest for k=0, therefore we ruled out the correlations between the root and the entire sequence as the cause of this phenomenon. However we missed an important aspect of the training, which is singular to the k=0 case. 
The deterministic nature of the root inference problem for k=0 makes it unique for two reasons. The first is that the logits outputted by the network need not be calibrated: the accuracy can reach the optimum without the transformer having fully implemented an algorithm equivalent to BP, whereas in the ambiguous k>0 cases the relative weights of the predictions must be well understood to match the optimal inference, as is visible in Fig. 7. The second is that matching BP is also easier in the k=0 case, because the training cross-entropy loss corresponds exactly to that computed with the true BP marginals (which are also delta-distributed, again due to the determinism), whereas in the k>0 cases the training loss does not explicitly guide towards the BP marginals; this is visible by re-plotting Fig. 7 with the Kullback-Leibler divergence between the network logits and the BP marginals instead of the accuracy. Bringing these two points together, we clearly understand why the k=0 case requires fewer samples than intermediate k. At the other end of the spectrum, it is understandable that the k=l case is the easiest, as it is implementable in a single feedforward layer: it requires only a Naive Bayes classifier and not an implementation equivalent to full BP. The intermediate cases then appear more or less equivalent, as they require an implementation of BP while neither being guided towards the correct marginals during training nor benefitting from the argmax, which washes away approximations of the correct marginals in the accuracy.\n\n**(Q4) [...] what do the r\u2019s represent? 
It looks like all r\\u2019s are initialized to uniform distribution, and they each get updated with the same formula (eqn 22), so would ri(a,m) ever be different from ri(a\\u2032,m) for a\\u2260a\\u2019?** We thank the reviewer for reading carefully our appendix, and spotting the absence of the base case of the recursion for the \\u201cr\\u201d messages (which we are now reporting). Moreover, we agree that the current explanation is insufficient to fully understand the role of \\u201cr\\u201d. We added a new paragraph, showing how the \\u201cr\\u201d recursion is obtained from the standard BP recursion, by conveniently playing with the traced indices.\\n\\n**(Q5) Refer to figure 3 instead of 1c on lines 337**: We thank the reviewer for spotting this mistake. However, we meant to refer to figure 1b. We fixed this typo in the revised version.\\n\\n**References:**\\n\\nBehrens, F., Biggio, L., & Zdeborov\\u00e1, L. (2024). Understanding counting in small transformers: The interplay between attention and feed-forward layers. In ICML 2024 Workshop on Mechanistic Interpretability.\\n\\nZhong, Z., Liu, Z., Tegmark, M., & Andreas, J. (2024). The clock and the pizza: Two stories in mechanistic explanation of neural networks. Advances in Neural Information Processing Systems, 36.\"}", "{\"comment\": \"I thank the authors again for additional clarifications about the claims being made and adding important details on the probing experiments in the updated version of the paper.\", \"c1\": \"I understand the authors have claims both about computational equivalence as well as some level of algorithmic equivalence between the learned transformer and BP, in the paper as a whole. I also agree with the authors that evidence in section 3 supports claims on computational equivalence. Nonetheless, alternatives to the BP algorithm exist. Take MLM as an example, which is the task focused on by the probing experiments. 
The authors state that a wide enough two-layer net could memorize the input-output mapping but wouldn\u2019t match the BP marginals. Why must this be the case? The training objective is MLE (log-loss), on samples drawn from the true distribution. As we increase the amount of training samples for the two-layer net, wouldn\u2019t it eventually match the ground truth conditional distribution of the masked token given observed tokens, and thus match BP marginals, assuming the two-layer net is expressive enough?\n\nSuppose this alternative is ruled out by an argument of training-data scarcity; other alternatives still exist - running BP on factor graphs that have equivalent distributions but a different graph compared to the ground truth binary tree structured factor graph. You can obtain such factor graphs by combining two or more small factors in the ground truth graph into a single larger factor, and/or by marginalizing out latent variables from the ground truth graph. At the extreme is a factor graph that only has a single factor which is connected to all sixteen observed variables, and which essentially just encodes the joint distribution with a single factor. Approximations to these solutions would behave similarly to approximations to BP on the test data because they approximate the same ground truth distribution over $x_{1:16}$.\", \"c2\": \"Thank you for adding details on probing experiments. Given the evidence I agree with the author that the frozen encoder representations of a token at each layer contain increasing amounts of information about the $2^{l-k}$ block, and this information is useful for predicting the ancestors one layer up. However, a question is whether this information is simply the assignment to the corresponding block of variables (e.g. at the first layer the observed size-2 blocks, the second layer size-4 blocks, etc), or whether it is some deeper embedding of it (e.g. 
the up- or downward messages of running BP given the observed values of the corresponding blocks). The strength of the evidence varies across the layers.\\n\\nFrom a difficulty perspective, learning to predict the upper layer variables (e.g. the root) using a two-layer readout that takes shallow embeddings of $x_{1:16}$ is arguably very difficult (since, in figure 3, even a four layer transformer takes more than $2^{14}$ sequences, the number of sequences used in probing, to become perfect at the root prediction task), and it is also more difficult than predicting lower layer variables from their smaller corresponding blocks. So the evidence for higher layer transformer encodings capturing easily decodable features of root, and not just $x_{1:16}$ seems strong. However, for lower level such as predicting the ancestor of a block of size 2 or 4, I wouldn\\u2019t be surprised if shallow concatenated word embeddings encoding the identity of the block (instead of the corresponding transformer encodings) is enough as input to a two-layer readout to learn to perfectly predict their ancestor, because while $2^{14}$ is not enough for root prediction of the entire block, it may very well be for predicting roots of smaller blocks. Thus it is actually surprising that the probe doesn\\u2019t do ~100% in figure 7 on predicting the level 3 ancestor using the first layer representation (Fig3, triangle@3). Could it be doing something different from pooling information from immediate neighbors (which is what BP would do)?\\n\\nOverall, while the paper presents several pieces of evidence that is compatible with a transformer that implements BP, I find the current evidence still insufficient to conclude whether 1) the transformer is approximating BP on the ground truth tree graph or doing something in-between BP and brute-force and 2) whether the observed behavior (e.g. 
marginal match / attention pattern / probing trends) in this paper would also arise on more complicated graphs, or even the same graph but with different transition matrices that allow for ambiguity. The particular structure of the transition matrix used in this study (for all levels of filtering) has the property that knowing both left and right child uniquely determines parent, and so many of the upward BP messages would effectively be computing this deterministic mapping from pair of children to parent (up to the filtered level), which is a very special case of BP messages.\"}", "{\"title\": \"Author response (part 2)\", \"comment\": \"**(W4) Evidence of BP vs other algorithms:** We appreciate this concern of the reviewer, and admit that further evidence, which we now provide in the updated version, allows us to more convincingly support our claim. Indeed, we have added in the revised version of our manuscript a more precise metric in the form of the Kullback-Leibler divergence between the BP marginals and the softmax (instead of argmax used for prediction) of the neural network outputs. We find that this quantity decreases following a staircase along the factorization levels during training, and eventually reaches small values in fully trained networks, showing that the \\u2018full\\u2019 (per-sample) output of the networks and BP match, and not just their accuracies. Nonetheless, we would like to stress several points that were perhaps insufficiently clear in the manuscript. In our data model, Belief Propagation is not simply an effective algorithm, it is the information-theoretically optimal oracle for any symbol inference task, whether it be the root or the leaves. As a result, matching its performance both in-sample (tested against the model it was trained on) and out-of-sample (tested against larger filtering levels) is a strong clue that the transformer implementation could be a close approximation of the exact algorithm. 
Moreover, it may be shown that the inside-outside algorithm is in fact identical to BP (Sato, 2007), when the topology of the parsing tree is given. Generic probabilistic CFGs may indeed be seen as tree-based graphical models with an additional probability distribution over the realization (i.e., topology) of the parsing tree. Finally, we would like to point out that the alignment between the output of the transformer and the optimal inference algorithm constitutes only part of the presented evidence. It has to be understood in conjunction with the \\u2018spatial\\u2019 organization of the attention maps, the interpretation of which is verified by our probing experiment at higher ancestry levels; this organization is compatible with the computational graph of BP. We feel that we have not managed to relate these points together convincingly in the current version of the manuscript, and we will address this issue by adding a comprehensive summary of our results, to paint a more complete picture of our analysis.\\n\\n**(W5) Major figures in the appendix:** Unfortunately, we cannot easily comply with this request by presenting all our evidence in the main text, given the stringent page requirement of ICLR. On the other hand, we believe that the results presented in Fig. 7 and the content of Appendices C.3 and C.4 provide secondary evidence in support of our claims. Fig. 7, for instance, shows that also transformers trained on filtered data models can reach optimal inference performance, which in our opinion is mostly a sanity check (optimal inference is already reached, as shown for the full data model, in Fig. 1). 
In a similar vein, Appendices C.3 and C.4 are consistent with our findings but do not lead to any independent conclusions.\"}", "{\"title\": \"Re 2 (2/2):\", \"comment\": \"**About C2**\", \"our_claims_on_the_affinity_of_transformer_and_bp_computation_are_based_on_all_the_evidence_presented_throughout_the_paper\": \"the equivalent input-output mapping, the presence of a compatible mixing pattern, and the sequential emergence of information about the ancestors of a leaf in the hidden representations of the corresponding token.\n\nOn the latter point, following the criticism of the reviewer, we decided to restructure also the writing of the probing experiments paragraph (see final revision), in order to clarify the weight of the evidence. We also repeated the experiments with more training samples for the probe (2^14) to achieve better accuracy and a cleaner plot. Since the available space in the main is limited, we moved the right panel to the appendix, and reserved some additional space for the following explanations: \n\n*\u201cKeeping the encoder weights frozen, we investigate how much information about the ancestors of any leaf is contained in the successive hidden representations of the corresponding token. While in the exact embedding of BP the k-th level ancestor information must be available at layer k to iterate the recursion for the downgoing messages, the MLM training does not set such a requirement. To probe the encodings, we employ a specialized two-layer readout for each encoder-layer/ancestry-level pair\u2014independent of the token position\u2014trained on a supervised dataset with 2^14 examples. In Fig. 7, we show that the prediction accuracy is high on ancestors up to the same level as the probed layer and deteriorates on higher levels of ancestry. 
Note that, unless the information about the entire block of 2^(\u2113\u2212k) tokens is properly mixed in through the attention mechanism, a perfectly accurate prediction of the common k-th level ancestor from a single token representation is impossible, as the mapping becomes non-deterministic. Moreover, the \u201coverfitting\u201d scenario, where the ancestors are reconstructed solely by the trained probes and the sequential reconstruction is an artifact, can be ruled out by considering the gap between the accuracies achieved from different layers\u2014the relative comparisons are fair since the readouts are trained on the same datasets\u2014and by training the probes only on some positions\u2014see Appendix D.6.\u201d*\n\nWhat was not clear from our previous writing, but is important to appreciate the result, is that:\n* The 1-hidden-layer probes (hidden dim=64) are attached to single token embeddings\u2014i.e., **no recombination can be performed**. \n* The probes are trained with the same data and for the same number of epochs between different layers\u2014i.e., the **probes should not overfit more for one layer than the others**.\n* We use the same readout for all positions, and in the appendix show that it need not be trained on all positions to retain effectiveness across positions\u2014i.e., **position dependence of the probability distributions for the symbols cannot explain the sequential effect**.\n\nWe think these experiments do support the thesis that some trace of the BP computation can be found in the transformer activations.\n\nWe thank the reviewer again for their efforts and hope our work during this discussion phase can convince them to raise their score.\"}", "{\"summary\": \"The authors build a binary-tree CFG with a filtering mechanism such that the nodes on layer k are sampled only conditioned on the root node.\n\nA transformer encoder is then trained to predict the root node. With k=0, the prediction is perfect. 
And the performance decreases as k increases. It is also shown that the model's performance on out-of-sample k exactly matches the performance from BP. The authors then conduct experiments on masked prediction (like BERT). Finally, the authors propose an exact implementation of the BP algorithm through the transformer computation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It's a novel viewpoint to study transformer learning from belief propagation.\n\nThe CFG construction with filtering is interesting.\", \"the_study_is_from_multiple_perspective\": \"prediction accuracy, probing, and manual construction.\", \"weaknesses\": \"I'm not sure whether it is surprising that the transformer accuracy matches that of belief propagation; could it be a natural consequence that the transformer is simply learning the \\\"optimal\\\" prediction function (which is the prediction made by belief propagation)?\n\nI think the writing of this paper can be improved.\n\nWhile the construction in Sec. 3.5 is interesting, it does not mean that the learned transformer is actually doing that (if I understand correctly).\n\nI may consider raising my score if the other two reviewers show strong interest.\", \"questions\": \"Line 355: I did not quite understand why the accuracy would have a drop in the middle of training.\n\nAlso I hope the writing for Section 3.3 can be improved by giving more easy-to-understand intuition.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper trains Transformer models on a synthetic PCFG in order to compare their mechanisms with an exact solution through belief propagation. Their analysis demonstrates that the attention patterns behave hierarchically in an interpretable way when the layers match the actual tree depth, which matches the behavior of the exact solution. 
They also show that this hierarchical structure emerges gradually during training.\\n\\nAlthough most reviewers found some merit in the various results and appreciated many of the visualizations, overall, several criticisms remained. In particular, all reviewers remain skeptical of the claim that the BP algorithm is the *only* implementation compatible with their results; further confirmation is needed. Broadly, the authors map several of their visualizations to specific claims about the algorithm implemented by the model and they show that the algorithm can be computed by the Transformer model, but while their findings are compatible with these proposed algorithms, they are not strong evidence. Some reviewers are also skeptical of the generality of their findings, given the small models trained on a synthetic setting, although this complaint can be applied to most empirically rigorous work on the science of deep learning.\", \"additional_comments_on_reviewer_discussion\": \"Authors responded adequately to the requests for presentation improvement by clarifying their contributions in the paper and expanding their discussion of specific technical aspects such as the feasibility of a belief propagation implementation within the model. 
They clarified a number of initial points of confusion on the part of each reviewer, and it is very possible that future revisions of this paper will be treated better because of the clarifications and additional detail in their revision.\\n\\nThe authors also added several experiments that continued to backup the claim that this trained model could be implementing BP, but none of these experiments guaranteed that this was the only possible algorithm.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author response (part 3)\", \"comment\": \"To address the referee\\u2019s questions:\\n\\n**(Q1) What \\\"knob\\\" does the filtering parameter k represent?** In simple words, k>0 creates a shortcut in the hierarchy of the generative tree (see Fig. 1). When we filter out the upper levels, the root directly generates 2^(k) ancestors, independently one from the other. This means that strong correlations, induced by the remaining (l-k) unfiltered layers, survive only within blocks of size 2^(l-k). Therefore, (as suggested by the reviewer) the lower the parameter k, the stronger the longer-range correlations in the data.\\n\\n**(Q2) Other relevant paper Khalighinejad et al. (2023).** We were not aware of this work, indeed related to transformers on probabilistic CFGs. We thank the referee for bringing it to our attention, the reference was added in our introduction. Similarly to Allen-Zhu & Li (2023) and Zhao et al. (2023), the paper studies the ability of transformers to reconstruct parsing trees, but does not address (i) the learning dynamics, (ii) the implementation of the algorithm within the architecture and (iii) the existence of an exact transformer based implementation of the classic algorithm. 
This additional reference reinforces our interest in interpreting algorithm discovery within transformer architectures.\\n\\n**(Q3) Graphical models & BP vs CFG parsing algorithms.** This is an interesting point, which we can answer along two main motivations. The first is simplicity and tunability. As mentioned above, we set out to go beyond the existing works on CFGs on the question of mechanistic interpretation and to understand more precisely how an inference algorithm is learned and implemented in transformers. This required a simplification of the data model, and notably relying on a fixed topology, allowing us to introduce the filtering parameter\\u2013i.e. to tune the range of correlations in sequences. In this setting, the standard CFG terminology would be misleading, as our data model lacks the central feature of probabilistic CFGs: variable sequence length and the related existence of terminal and non-terminal symbols. The other motivation is that our setting is slightly more model agnostic. Indeed, while not tuned to describe natural language, probabilistic graphical models can describe many other objects, from protein sequences to random graphs. In this respect, our work could be used to make contact with markedly different problems where transformers could also be effective, for instance community detection in stochastic block models (Mossel et al., 2014) or planted graph coloring problems on graphs (Krzakala & Zdeborova, 2009), for which Belief Propagation is still an effective algorithm.\\n\\n**(Q4) What would the equivalent PCFG be, incorporating the depth constraint and k?** As already mentioned in W4 and Q3, it is unclear that a standard PCFG could be formulated to be equivalent to our model, since it would require one to remove the intrinsic ambiguity related to the topology of the parsing tree. Only in a fixed topology case (implying a fixed sequence length), the hierarchical tree can be cut at a specific level with the filtering procedure. 
In fact, in a general PCFG, the terminal symbols could be found at any level of the parsing tree, and their correlations to the other leaves are to be determined at inference time. In our model, the correlation between the elements of the sequence (i.e., the terminal symbols) is fully specified by the positions in the sequence. Therefore, generalizing the hierarchical filtering procedure to PCFGs is non-trivial. Moreover, we are not considering a separate dictionary for the non-terminal symbols and for the root symbol. While this choice may make little sense from the perspective of linguistics, it allows us to define a non-trivial root classification task, which would not be possible in a normal PCFG.\"}", "{\"title\": \"Author response (part 4)\", \"comment\": \"**(Q5) What is $O_a$? What is $q$?** We start by acknowledging that our presentation was not effective; we will rewrite this paragraph for clarity. To address the question: $q$ is the size of the dictionary of symbols that can be taken by any element of the sequence, and by any node on the ancestry tree starting from the root (see Q4). $O_a$ represents the set of possible children pairs generated by a parent symbol $a$ (i.e. the allowed production rules). What we mean by $O_a \\cap O_{a'} = \\emptyset$ is that the production rules we consider are completely distinct for each parent symbol, making our unfiltered model non-ambiguous (at odds with more general PCFGs). Given a pair of children, there can only be one possible parent in the layer above. $|\\cup_a O_a|=q^2$ on the other hand means that any possible children pair can be produced, and the rules are equally partitioned among the $q$ possible parent values. To summarize, our model is described by a transition tensor that has $q\\times q \\times q$ entries. 
It is then organized as q \\u2018slices\\u2019 populated by $q$ entries (the rest of the entries being zero), which are non-overlapping for each slice.\\n\\n**(Q6) l.144: It's not clear to me what this means. Can you express this in equations?** This is related to the point above. What we mean is that in the unfiltered cases, transitions in the tree are of the form $M(a \\\\to bc)$ where a is the parent symbol and b and c are the children, as shown by black factors in Fig.1a. By strongly correlated we mean that in each transition b and c are not drawn independently but as a pair. We hope this clarifies our statement. \\n\\n**(Q7) Can the root always be uniquely determined by the input symbols?** Yes, this is the case with our choice of non-ambiguous production rules in the unfiltered (full hierarchical) model (see Q5). Given a pair of children symbols, one can exactly infer their parent. Combining this for all pairs in the sequence and going up the tree, the root can be determined with certainty. This difference with PCFGs is central, and justifies our framing in terms of a probability distribution described by a factor graph rather than in terms of CFGs, see Q3. \\n\\n**(Q8) How does the BP algorithm described in the main text relate to the experiments?** Given any filtered/unfiltered hierarchical model, one can derive the associated BP algorithm (using explicit knowledge of the production rules and the corresponding transition rates), which represents the exact oracle for any inference task defined in the context of the graphical model. As such, it provides an information-theoretic bound for the accuracy of the reconstruction of hidden symbols (e.g, the root, or the masked tokens). This is what is shown by the black dashed line in Figs. 1b), 1c), 1d), Fig. 3 and Fig. 6; and by all dashed lines in Fig. 7. It also provides a comparison point in out-of-sample tests, i.e. 
used on data generated from different levels of filtering compared to the training data, which is shown by the colored lines on Figs. 1b), 1c), Fig. 3 and Fig. 6. As stated in the point above, the accuracy is trivially equal to 1 for the root classification given an entire sequence for fully hierarchical data.\"}", "{\"title\": \"Author response (part 1)\", \"comment\": \"We thank the reviewer for carefully reading our work and providing valuable feedback, which we will use to improve our presentation. We would first like to address the overall weaknesses identified by the reviewer:\\n\\n**(W1) It\\u2019s not clear how much the observations made in this work generalizes to larger models, and more complicated data distribution:** Considering somewhat prototypical tasks is customary and often necessary when the goal is to achieve a mechanistic interpretation of the transformer\\u2019s computation (Zhong et al., 2024; Behrens et al., 2024), and we believe it can still provide intuitive explanations that generalize to more complicated cases. For example, in our work, we can argue that the sequential discovery of higher levels of hierarchical correlations, involving longer-range token interactions, is likely to shape learning processes also in real-world NLP tasks. We clarified this point in the revised version. \\n\\n**(W2) The empirical evidence presented has alternative interpretations that haven't been ruled out:** We thank the reviewer for motivating us to follow up on their question with some new experiments, which will be included in the revised version. In particular, we compare the full output probability distributions of the transformer and BP (via a KL divergence in the main text, and the Spearman correlation coefficient and scatter plots in the appendix), as a function of the training epochs. 
Not only do we find that the model learns to approximate the exact BP marginals, but we also show that, at intermediate stages of learning, the transformer sequentially aligns its predictions with the filtered-BP marginals, recovering the same stair-case behavior observed in Fig. 1(c). This evidence reinforces our picture of the consecutive discovery of hierarchical correlation levels in transformers. We address the reviewer\u2019s more specific questions on this topic below.\n\n**(W3) The construction for implementing BP on depth-l tree using a transformer of only l-layers could benefit from a bit more details, especially on the root-to-leaf message passing part:** We thank the reviewer for carefully reading our appendix and taking interest in this more challenging part of our work. We agree that the current explanation is insufficient to fully understand the role of \u201cr\u201d. We added a new paragraph, showing how the \u201cr\u201d recursion is obtained from the standard BP recursion, by conveniently playing with the traced indices. We address the well-spotted missing part of our recursion on \u201cr\u201d in the answer to question Q3 below.\"}", "{\"title\": \"Re:\", \"comment\": \"We thank the reviewer for engaging in the discussion. We would like the reviewer to clarify their point in concern C1, if possible, before we try to answer their further questions. It seems that the reviewer is stating that the transformer can:\n* not only solve the tasks in section 3,\n* but obtain the same input-output mappings as BP, both on the training distribution and out-of-sample (even on random inputs),\n\nyet with a completely different, unrelated computation. 
\n\nIs there some known example of a similar scenario, where two unrelated non-linear functions end up producing the same continuous outputs on all inputs, without this match being enforced explicitly?\n\nIn our understanding, in standard situations, if you train two neural networks on the same data, they might eventually align their outputs on the training data distribution. However, they will still provide different outputs on random data. \n\nIs the reviewer suggesting there exists a completely different way of performing exact inference on a tree? And more generally, what evidence would be sufficient to support the claim that an architecture implements algorithm X?\n\nWe thank the reviewer again; we will follow up and address the technical points raised above.\"}" ] }
F0TrRRKkQT
Takin-VC: Zero-shot Voice Conversion via Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling
[ "Yang Yuguang", "Yu Pan", "Jixun Yao", "xiang zhang", "Jianhao Ye", "Hongbin Zhou", "Lei Xie", "Lei Ma", "Jianjun Zhao" ]
Zero-shot voice conversion (VC) aims to transform the source speaker timbre into an arbitrary unseen one without altering the original speech content. While recent advancements in zero-shot VC methods have shown remarkable progress, there still remains considerable potential for improvement in terms of improving speaker similarity and speech naturalness. In this paper, we propose Takin-VC, a novel zero-shot VC framework based on jointly hybrid content and memory-augmented context-aware timbre modeling to tackle this challenge. Specifically, an effective hybrid content encoder, guided by neural codec training, that leverages quantized features from pre-trained WavLM and HybridFormer is first presented to extract the linguistic content of the source speech. Subsequently, we introduce an advanced cross-attention-based context-aware timbre modeling approach that learns the fine-grained, semantically associated target timbre features. To further enhance both speaker similarity and real-time performance, we utilize a conditional flow matching model to reconstruct the Mel-spectrogram of the source speech. Additionally, we advocate an efficient memory-augmented module designed to generate high-quality conditional target inputs for the flow matching process, thereby improving the overall performance of the proposed system. Experimental results demonstrate that the proposed Takin-VC method surpasses state-of-the-art zero-shot VC systems, delivering superior performance in terms of both speech naturalness and speaker similarity.
[ "zero-shot voice conversion", "hybrid content encoder", "memory-augmented context-aware timbre modeling", "conditional flow matching" ]
https://openreview.net/pdf?id=F0TrRRKkQT
https://openreview.net/forum?id=F0TrRRKkQT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sKKqBTLj9V", "qUKKPy1JJm", "fokA3H3ReP", "U5b4TRkiTa", "FV7SDsxN9C" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730621949190, 1730634980352, 1729500156348, 1732500724338, 1730472693703 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7451/Reviewer_Jb2q" ], [ "ICLR.cc/2025/Conference/Submission7451/Reviewer_wfZ4" ], [ "ICLR.cc/2025/Conference/Submission7451/Reviewer_R8GT" ], [ "ICLR.cc/2025/Conference/Submission7451/Authors" ], [ "ICLR.cc/2025/Conference/Submission7451/Reviewer_wkqP" ] ], "structured_content_str": [ "{\"summary\": \"This work appears to be backed by substantial engineering efforts, enhancing zero-shot voice conversion performance through two main techniques: Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors provide a clear and thorough explanation of the Jointly Hybrid Content and Memory-Augmented Context-Aware Timbre Modeling techniques.\\n2. The proposed model achieves competitive results when compared to existing baselines.\", \"weaknesses\": \"1. The field of zero-shot voice conversion (VC) is somewhat niche, which may limit its broader impact. Moreover, with the impressive progress in related tasks such as zero-shot TTS (voice cloning), the potential for further development in zero-shot VC may be constrained.\\n2. The novelty of the approach is limited. The Jointly Hybrid Content module is essentially a combination of HybridFormer and WavLM representations, while Context-Aware Timbre Modeling via Cross-Attention is a common technique. Additionally, the Memory-Augmented Timbre Modeling component is a relatively simple block and lacks significant theoretical contributions.\\n3. Differences in the training dataset could introduce data bias.\\n4. 
It would be beneficial to compare the proposed model against more recent baselines, such as CosyVoice-VC and Seed-VC. Although I understand that comparing with works from the last few months might be unfair, demonstrating that Takin-VC significantly outperforms contemporaneous methods would showcase its potential.\\n5. The ablation studies do not demonstrate notable improvements in terms of audio quality or robustness.\", \"questions\": \"nothing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed TAKIN-VC to deal with zero-shot TTS by hybrid content encoding and memory-augmented context-aware timbre modeling. The experimental outcomes presented demonstrate that TAKIN-VC outperforms existing state-of-the-art voice conversion (VC) systems, highlighting its effectiveness and potential in enhancing speech synthesis technology.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Figure 5 effectively illustrates the speaker similarity between the ground truth and the converted speech, providing clear visual evidence of the model's capability to maintain speaker characteristics.\\n2. Both subjective and objective evaluations demonstrate the effectiveness of the proposed method in achieving high-quality voice conversion. These results provide a comprehensive understanding of the model's performance from multiple perspectives.\", \"weaknesses\": \"1. Given that the output of the task is audio, it would be beneficial to provide some demonstrations either through a demo page or supplementary materials to better showcase the audio results.\\n2. Some notations in Sec. 3.1 is not easy to understand and remember, which might hinder the clarity and accessibility of the paper.\\n3. 
In the hybrid content encoder, SSL features are merely quantized, raising concerns about whether the timbre information contained could still potentially leak. The paper notes that PPG features can offer disentangled timbre information for SSL features; therefore, a comparative analysis of whether to use PPG at all, and of PPG versus more purified semantic information (such as text), is expected to clarify the effectiveness and security of the feature handling. The results of the ablation study can only demonstrate that using PPG is not enough.\\n4. The context-aware timbre modeling seems important, while the analysis is relatively inadequate. It would be insightful to investigate what the performance implications would be if only a global timbre embedding from the VP model were used, thereby understanding the significance of local context in timbre adaptation more deeply.\\n5. It would be better to explain more about the large-scale dataset and show some examples of this dataset.\\n6. The input of the memory module is x_sctt in Figure 4, but according to the description in Sec. 3.2.2, the input of the memory module should be x_ref.\", \"questions\": \"See the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a zero-shot voice conversion method that enhances audio quality by addressing timbre leakage. The approach shows improvements over existing methods through extensive experiments. However, its novelty is limited, relying on established techniques. 
Additional clarity and further ablation studies are recommended.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper proposes a zero-shot voice conversion method that uses context-aware timbre features and a multi-stage pretrained content encoder to solve the timbre leakage problem and enhance the naturalness and quality of the converted audio. They also include comprehensive experiments, including ablation studies and evaluations on small and large datasets to show improvements over the state-of-the-art (SOTA) in both subjective and objective metrics, which suggests that the method can improve both speaker similarity and the naturalness of converted speech.\", \"weaknesses\": \"1. While the integration of neural codec-based training, context-aware timbre modeling, and flow matching is interesting, the novelty of the contribution seems limited, with much of the work focusing on combining well-established techniques rather than introducing fundamental innovations.\\n2. The authors mention they improve real-time performance but lack further analysis and experiments to show how they improve it.\\n3. The figures can be improved to stay consistent with the notations and modules used in this paper; otherwise the model architecture can be hard to understand.\\n4. The paper takes cross-attention between the content and timbre feature as one of the contributions. However, many existing works like GPT-SoVITS have already carried out methods similar to it.\\n5. Ablation studies should be carried out on the neural codec training.\\n6. The author should give more explanation on neural codec training like the objective function and when the neural codec training takes place (i.e. perform neural codec training first or together with other training processes.)\", \"questions\": \"1. How to prevent overfitting during neural codec training?\\n2. 
For the neural codec involved in Mel and FBank features besides the encoder, how to directly generate the speech from the fused features of WavLM and HybridFormer in Figure 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The Takin-VC method is proposed, further enhancing the naturalness of the results while ensuring timbre conversion. An encoder is designed that integrates PPG and SSL features, compensating for the shortcomings of both types of features and better extracting semantic content. The extraction results of the Mel spectrogram and speaker features are fused as key-value pairs. The target timbre is integrated using a cross-attention mechanism. A memory-augmented module is utilized to generate conditions for the CFM, further improving timbre performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The effectiveness of the VC task is enhanced by combining some existing models or methods, with the most notable feature being the design of the encoder. The writing style is good, and the narrative is clear. The entire method is described with sufficient accuracy. The decoupling operations of timbre and semantic content prior to CFM hold certain reference significance in the current context of rapidly advancing large models.\", \"weaknesses\": \"The generation of key-value pairs utilizes reference Mel and VP features. It needs to be supplemented in the ablation experiments to determine whether the results would change if only VP features were used. Additionally, the paper mentions that SSL features do not clearly decouple timbre information. 
Currently, the authors refine semantic information through VQ operations, but many studies suggest that VQ does not guarantee complete decoupling of timbre and semantic content. It is recommended that the authors further test the effectiveness of VQ in their experiments.\", \"questions\": \"All my questions have been raised in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
F0K0zxi62U
EVGAP: Egocentric-Exocentric Video Groups Alignment Pre-training
[ "Peiyao Wang", "Haibin Ling" ]
Aligning egocentric and exocentric videos facilitates the learning of view-invariant features, which significantly contributes to video understanding. While previous approaches have primarily focused on aligning individual ego-exo video pairs, our method extends this concept by aligning groups of synchronized egocentric and exocentric videos. This strategy enables the model to capture more comprehensive cross-view relationships across densely captured viewpoints, enhancing its capacity for robust multi-view understanding. Therefore, we develop a pipeline based on contrastive learning for \textbf{E}gocentric-exocentric \textbf{V}ideo \textbf{G}roups \textbf{A}lignment \textbf{P}re-training (EVGAP). Our method introduces several key innovations: 1) a novel video pre-training paradigm that extends alignment from ego-exo video pairs to ego-exo video group alignments; 2) an innovative two-step training process that leverages the abundant ego-exo video pair data to support the learning of ego-exo video group alignments, transitioning from sparse to dense viewpoints; and 3) the application of auxiliary losses to progressively align videos from different perspectives. Extensive ablations illustrate the effectiveness of our approach in single-view and multi-view downstream tasks. We also find that our approach facilitates tasks including novel views. The codes will be available upon acceptance.
[ "multiview video", "view-invariant pretraining", "view alignment", "ego-exo pair alignment" ]
https://openreview.net/pdf?id=F0K0zxi62U
https://openreview.net/forum?id=F0K0zxi62U
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vk1sB864EU", "oZ98eo3SAT", "o8Rd3jQag2", "NWJdGjIZkO", "4s7eubhHSS" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730607346445, 1730856461746, 1729639998237, 1730879798767, 1732051527725 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5997/Reviewer_JGNn" ], [ "ICLR.cc/2025/Conference/Submission5997/Reviewer_Fg36" ], [ "ICLR.cc/2025/Conference/Submission5997/Reviewer_f9HT" ], [ "ICLR.cc/2025/Conference/Submission5997/Reviewer_4Db2" ], [ "ICLR.cc/2025/Conference/Submission5997/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes egocentric-exocentric video groups alignment pretraining. The motivation is to align groups of ego and exo videos, in contrast to aligning pairs of ego and exo videos. The authors propose a two-step pretraining strategy and demonstrate improvement on two downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation is valid. It makes sense to me to explore the idea on utilizing the multi-view videos to form ego and exo video groups (compared with just enforcing ego-exo pair alignment with multi-view videos).\", \"The experiments show performance gain of EVGAP. The novel view inference setting (section 4.5) is interesting.\"], \"weaknesses\": \"+ The paper is not very well-written, a few clarifications are needed:\\n1. L158: How are the scenes defined and identified? Also, is the scene definition incorporated in stage-I training?\\n2. L161: The objective of EVGAP is to \\\"align and pair\\\" ego-exo video groups. However, this description is overly simplistic. Please expand on what \\\"align and pair\\\" specifically entails in this context. Provide more formal definitions or explanations to clarify the intended meaning.\\n\\n\\n+ Experiments: evaluation setting is weak. 
The authors only conduct experiments on two datasets (Charades-Ego and Assembly101) and downstream tasks are only on Assembly101. Moreover, while the authors review a few ego-exo view-invariant works (L37-38), none of them is implemented as a baseline for comparison with EVGAP. The experiments are only about evaluating whether EVGAP gives additional performance gain on top of the task-specific approaches. The gain is expected since EVGAP benefits from more data in the pretraining. However, the real comparison should be about how different ego-exo feature learning approaches perform on these downstream tasks. I believe adding those baselines is essential. \\n\\n\\n+ Method novelty is limited. I feel the major claim of the paper is to better utilize the multi-view ego-exo videos, extending a regular contrastive loss to account for groups of ego and exo videos. The contribution is incremental from my understanding. Moreover, the claim that using groups and objective 2 is better than using pairs is not thoroughly evaluated. For example, in Tables 4 and 5, the authors should report results of aligning all possible views as pairs, to demonstrate the superiority of the proposed objective. Otherwise, the improvement could be attributed to introducing multi-view data in pretraining.\\n\\nOverall, my concern with the paper is its limited novelty and weak evaluation. Specifically, I feel that the biggest claim made in the paper is not thoroughly evaluated, and there is a noticeable absence of baseline comparisons.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript studies leveraging synchronized first-person-view (egocentric) videos and third-person-view (exocentric) videos for self-supervised learning. 
Specifically, the manuscript proposes to align a group of ego-videos and a group of exo-videos in a contrastive learning framework, in contrast to instance-level contrastive learning. The method is termed EVGAP. EVGAP is developed in two stages: in the first stage, the standard contrastive loss is applied between paired ego-/exo-videos; in the second stage, the model is then fine-tuned on the group alignment contrastive loss. The pre-training is conducted on the Assembly101 and Charades-Ego datasets. Two downstream tasks, temporal action segmentation and temporal action anticipation, are evaluated to show the effectiveness of the proposed methods and ablate the design choices.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is structured well and the experiments are executed completely. I found it easy to follow the proposed idea.\", \"weaknesses\": [\"The proposed method is conceptually simple. The method depends a lot on paired dense egocentric/exocentric videos for the pre-training, which in my opinion is one major drawback of the proposed method. As it is expensive to collect the synchronized videos, the scalability of the proposed method is questionable.\", \"It is not clear to me why contrastively aligning two groups of synchronized videos from different viewpoints would have benefits over aligning paired video instances from two views.\", \"In general, the performance differences in the main results and in the ablation studies are marginal - the difference is less than 2 percent, and sometimes the comparison between two methods shows less than 1 percent difference. For instance, in Table 1, the difference on the averaged metric in (c) and (d) is pretty marginal, which does not quite indicate the effectiveness of the two-stage training design in my view. Considering other factors in the experiments, e.g. having extra layers in the proposed model, domain gap in the baseline methods, etc. 
more significant results are needed to justify the effectiveness of the proposed method and designs.\", \"Some related works [1][2][3][4] that could leverage ego-exo videos for pre-training under the setup of this manuscript, though discussed in the introduction and related work section, are not included in the baseline results. Would those baselines perform better/worse/equally well on the benchmarks this manuscript studied?\", \"Overall I think this work is not well-motivated, the proposed method has major limitations and the conducted experiments do not show significant improvements on downstream tasks.\", \"[1] Wang, Q., Zhao, L., Yuan, L., Liu, T. and Peng, X., 2023. Learning from semantic alignment between unpaired multiviews for egocentric video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3307-3317).\", \"[2] Xue, Z.S. and Grauman, K., 2023. Learning fine-grained view-invariant representations from unpaired ego-exo videos via temporal alignment. Advances in Neural Information Processing Systems, 36, pp.53688-53710.\", \"[3] Sigurdsson, G.A., Gupta, A., Schmid, C., Farhadi, A. and Alahari, K., 2018. Actor and observer: Joint modeling of first and third-person videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7396-7404).\", \"[4] Ardeshir, S. and Borji, A., 2018. An exocentric look at egocentric actions and vice versa. Computer Vision and Image Understanding, 171, pp.61-68.\"], \"questions\": [\"Does the number of videos (N) in one group matter for the performance of the proposed method? When N = 1, the proposed method reduces to contrastive training with paired ego-/exo-videos. 
This seems to be one important baseline to compare against and it is missing from the paper.\", \"Would un-synchronized video groups work in the proposed framework?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel two-step pre-training process for ego-exo video alignment, aimed at improving multi-view video understanding. The approach starts with traditional ego-exo video pair pre-training, followed by an extension to grouped ego-exo video alignment, allowing the model to capture denser cross-view relationships. The model employs contrastive losses at each layer as auxiliary supervision to enhance training efficiency. The method is evaluated on the Assembly101 dataset, demonstrating its effectiveness in improving downstream tasks like temporal action segmentation and action anticipation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality: The paper presents a novel extension of ego-exo video alignment by moving from individual pair alignments to group-based alignments. This shift is supposed to capture richer inter-view relationships, which is a creative and meaningful advance over existing approaches.\", \"quality\": \"The experimental results are supported by ablation studies, showing improvements on certain evaluation metrics over the chosen base model, but not the state-of-the-art for the same task, such as ASQuery.\", \"clarity\": \"The method is clearly described, with well-organized sections explaining the motivation, approach, and results. 
Figures, such as the visualizations of video alignment processes, effectively illustrate key concepts.\", \"significance\": \"The contributions have the potential to improve video understanding, particularly for tasks involving multi-view data.\", \"weaknesses\": \"Data Dependency: The reliance on synchronized ego-exo video pairs imposes a strong constraint on the data, limiting its applicability to datasets where such synchronization is available.\", \"mixed_results_in_multi_view\": \"The performance of the two-view setting ((d) Base + Step1 + Step2) does not consistently outperform the single-view setting in key metrics (e.g., F1 and Edit scores in Table 1). This contradicts the expectation of multi-view alignment improving overall video understanding. Additionally, it is reasonable to expect that extending the training for more epochs could improve performance in some cases. To ensure a fair comparison, (b) and (c) should be trained for the same total number of epochs or training time as (d). This would demonstrate whether the improvement observed in (d) is due to the proposed method, rather than simply the result of extended training in earlier steps.\", \"baseline_comparison\": \"The paper compares the proposed method against C2F-TCN, a model not specifically designed for view-invariant feature learning. To fully assess the effectiveness of the approach, comparisons with state-of-the-art models in the same domain would provide a stronger baseline, such as the methods listed in section \\\"1. INTRODUCTION\\\" or a method named ASQuery, which reports better F1@10 performance on Assembly101.\", \"limited_dataset_coverage\": \"The evaluation is restricted to the limited validation split of the Assembly101 dataset, given that the model itself is trained on the Assembly101 dataset. 
Including results on additional public datasets like CMU-MMAC, H2O, or Ego-Exo4D would help generalize the findings and demonstrate the robustness of the method across diverse settings.\", \"questions\": \"In Figure 1 (a), the top labels all describe \\\"the ego video in scene\\\" for four items. Should two of these items be labeled as \\\"exo\\\" videos instead of \\\"ego\\\"?\\n\\n\\nSection 3.4 discusses the auxiliary loss but does not explain how the values of \\u03b1 and \\u03b2 are selected for each layer during the first and second steps of training. Can you provide more detail on how the weight 0.2, noted in Section 4.2, was chosen?\\n\\n\\nIn formula (1), what is the meaning of the two logarithmic terms? Specifically, what distinguishes the first log from the second in terms of its contribution to the loss?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a video pretraining method that aligns a group of ego-centric and exocentric videos. Specifically, it extends previous contrastive learning objective of aligning synchronized video clips of ego-centric and exocentric together by adding multi-view of both ego-centric and exocentric clips into the alignment. A two-stage pretraining strategy is proposed and layer-wise aux losses are added. 
Experiments are conducted on Assembly101, Charades-Ego datasets and show improvement over baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tAligning multi views of the same clips is straightforward and the proposed loss is easy to understand.\\n\\u2022\\tMultiple downstream tasks including temporal action segmentation, action anticipation, and novel view evaluation are considered.\", \"weaknesses\": \"\\u2022\\tAblation study did not include the effect of the additional aux losses\\n\\u2022\\tWhen comparing to SOTA methods in Table 2 and Table 3, the paper directly build upon previous methods, e.g LTContext, where extra layers and pretraining data are added, so it is unclear where the gain is coming from. \\n\\u2022\\tThe proposed method requires datasets of multi-view aligned videos of both ego-centric and exocentric, which is limited and costly to find.\", \"questions\": \"\\u2022\\tWhat is the additional gain of adding the aux losses, can authors give some numbers?\\n\\u2022\\tThe LTContext results in Table 2 is lower than the original paper reported. E.g. LTcontext paper reports 33.9 F1 where in table 2 it is 33.6, can authors give more details about this discrepancy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
F0GNv13ojF
On Designing Effective RL Reward at Training Time for LLM Reasoning
[ "Jiaxuan Gao", "Shusheng Xu", "Wenjie Ye", "Weilin Liu", "Chuyi He", "Wei Fu", "Zhiyu Mei", "Guangju Wang", "Yi Wu" ]
Reward models have been increasingly critical for improving the reasoning capability of LLMs. Existing research has shown that a well-trained reward model can substantially improve model performances *at inference time* via search or best-of-N votes. However, the potential of reward models during *RL training time* still remains largely under-explored. It is currently unclear whether these reward models can provide additional training signals to RL training that uses sparse success rewards, which verify the correctness of solutions. In this work, we evaluate popular reward models for RL training, including the Outcome-supervised Reward Model (ORM) and the Process-supervised Reward Model (PRM), and train a collection of LLMs for math problems using RL by combining these learned rewards with success rewards. Surprisingly, even though these learned reward models have strong inference-time performances, they may only bring marginal improvements or even hurt RL *training*, producing worse performances than LLMs trained with the success reward only. We find that *training collapse* easily occurs in RL training when PRM simply serves as reward shaping in addition to the success rewards. Our further analysis reveals two issues that may lead to the sub-optimal performance. Therefore, we introduce two novel reward refinement techniques, including the **Clip** and the **Delta** mechanisms, to tackle the identified issues. We evaluate our techniques with multiple reward models over a set of 1.5B and 7B LLMs on MATH and GSM8K benchmarks, where both **Clip** and **Delta** consistently enhance RL training. Finally, we also demonstrate that with a carefully designed reward function, pure RL training without any additional supervised tuning can further improve all the evaluated LLMs, including the state-of-the-art 7B LLM Qwen2.5-Math-7B-Instruct on MATH and GSM8K benchmarks.
[ "Large Language Models", "RLHF", "PPO", "LLM for Reasoning", "Reward Design" ]
Reject
https://openreview.net/pdf?id=F0GNv13ojF
https://openreview.net/forum?id=F0GNv13ojF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xreyZnscYt", "tRzfTdr4Pn", "sBVRIKGhnw", "ptW0KQWZZo", "o86yAUzspP", "n6xL5W23kl", "mFcSoh4KWV", "kmLJins1re", "kbhLrJQ1id", "ihfUEvMQPE", "dcnQMToJXc", "Wi5DE7WIIv", "SveHZGKGrR", "OUO34TvAyP", "NeyiJZ51MG", "Dk7ipWs4ff", "CrktnhlrMW", "BfS5KCLF9Z", "9L4WDRXe1r", "8DPb8hxQ2x", "7bvymkbD0C", "5i6qaIWQL4", "3iHpxCqaZb", "2uGwBquv5M" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732208075657, 1732208639077, 1730435973014, 1732208421537, 1730681130640, 1732208099655, 1732346202474, 1732208209172, 1732208582409, 1730659292986, 1733051932748, 1730442699173, 1733086587484, 1732549410892, 1732208266025, 1734921470452, 1730646235695, 1732350914026, 1732208388940, 1730698970186, 1737524124553, 1732208610142, 1732208059778, 1733094530345 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_ffHa" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_MaFz" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_7bHH" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_gCaQ" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_gCaQ" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_sMoF" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_7bHH" ], [ 
"ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Area_Chair_fbvy" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_6ViF" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Reviewer_7bHH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ], [ "ICLR.cc/2025/Conference/Submission11435/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response by Authors\", \"comment\": \"We thank the reviewer for the thorough assessment of our paper. Herein, we address the points raised by the reviewer:\\n## 1. Motivation and Explanation of the Delta Mechanism\\n> \\\"What is the motivation behind the delta mechanism? Why should we assign credit to a_t if the reward of a_{t+1} is lower?\\\"\\n\\n- We conduct additional case studies and theoretical analysis to better illustrate the motivation and the effect of the proposed methods. **Please refer to the Global Response and the revised paper for more details.**\\n- Our additional case studies in Fig. 3 (also [available here](https://i.postimg.cc/Gp1Cq89Y/case-study.jpg)) reveal the reward misspecification issue when directly using PRM rewards as dense rewards, which could make RL training mistakenly promote incorrect steps. \\n- **The Delta mechanism ensures the steps promoted by RL training are aligned with the PRM.** The effect of the Delta mechanism can be better illustrated through our additional theoretical analysis of SR+PR-Delta in Appendix E. 
We show the policy gradient of RL training with SR+PR-Delta can be given in a step-wise manner,\\n $$\\n \\\\nabla_\\\\theta J_{r}(\\\\pi_\\\\theta)=\\\\mathbb E_{q\\\\sim \\\\mathcal D,s\\\\sim \\\\pi_\\\\theta(\\\\cdot|q)}\\\\large[\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s|q)\\\\cdot \\\\text{Correct}(q,s)+ \\\\alpha\\\\cdot\\\\underbrace{\\\\sum_{k=1}^{K-1}\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s^{(k)}|q,p^{(k-1)})\\\\cdot r_{process}(q, p^{(k)})}_{\\\\text{Effect of the Delta mechanism}}\\\\large] +\\\\text{KL term}\\n $$\\n Thus, **the Delta mechanism optimizes the single-step PRM rewards**.\\n\\n## 2. Explanation of Why the Clip and Delta Mechanisms Can be Combined\\n> \\\"Is there any intuition on why both clip-and-delta should be applied to get a boost and none of them work well in isolation?\\\"\\n\\n- We clarify that both the Clip and the Delta mechanisms can enhance RL training with success rewards. From the ablation study, as shown in Table. 1, both SR+PR-Clip and SR+PR-Delta attain better sampling accuracy compared with Success Reward. By further applying the Delta mechanism over PR-Clip, PR-Clip-Delta brings a stable and precise improvement over SR.\\n- The proposed techniques are introduced to mitigate the identified issues in our case studies in Fig. 3. The Clip mechanism mitigates the intrinsic biases of the PRM. The Delta mechanism tackles the reward misspecification issue. **As these mechanisms focus on tackling different aspects, they can be readily combined to enhance RL training.**\\n\\n\\n## 3. Performance Gain of the Proposed Mechanisms\\n> \\\"Can we consider the gains by mixing the clip and delta mechanism are strong and not modest?\\\"\\n\\n- Yes, the gains by mixing the Clip and the Delta mechanisms are strong.\\n- Our ablation study in Table. 
1 and training curves of all methods in Appendix A.2 (also [available here](https://i.postimg.cc/jd1HPLQM/training-curves.jpg)) shows that PR-Clip-Delta can achieve higher greedy accuracy and sampling accuracy than the baseline methods.\\n- The effectiveness of the proposed approaches can be better illustrated in Fig. 5 (or [available here](https://i.postimg.cc/qRTLfCDT/perf-improve.jpg)), where **using PR-Clip-Delta as dense rewards improves RL training with success rewards only across all evaluated LLMs.**\\n\\n## 4. Discussion on the Hackability of PRM\\n\\n> \\\"Your PRM is trained on automatic data. Is this possible that this hackability issue is because of the noise caused by the automatic procedure and would be avoided when trained on human-data or better PRM data generation (I understand better PRM data generation is itself a research question)?\\\"\\n\\n- We believe that training on human data or better PRM data can mitigate but not completely solve the identified issues. Intrinsic bias generally exists for learned reward models, not just limited to PRM. On the other hand, though the training data has noise caused by the automatic procedure, human data itself also has noise due to the diverse preferences of the labelers. In practice, we also find that a public PRM trained on human data (specifically, llemma-7b-prm-prm800k-level-1to3-hf) can also have intrinsic bias, assigning non-negligible and even high values to incorrect steps, as shown in Appendix F. We also believe that, if a better PRM is available, our approach can be further applied to enhance the performance of stronger LLMs through RL training.\\n\\nWe hope our response addresses your concerns, and we welcome any further questions or suggestions.\"}", "{\"title\": \"Response by Authors (Part III)\", \"comment\": \"## 5. Training Curves of PR-Normed and the Proposed Methods\\n> \\\"The paper argues that PR-Normed approach overfits and the performance degradation is severe. 
To see SR+proposed method algorithms do not suffer from the same problem, we need more information, e.g., learning curves of all algorithms.\\\"\\n- During early training epochs, PR-Normed shows a sign of overfitting and only achieves sub-optimal test accuracy compared with SR, as shown in the following table. RL training of SR+PR-Normed also suffers from significant performance degradation after Epoch 3.\\n\\n**Train/Test accuracy of SR vs. SR+PR-Normed across training epochs:**\\n| train acc./ test acc. | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | 30.54 / **29.26** | 34.29 / **29.72** | **35.16** / **29.86** | **38.52** / **30.16** | **38.07** / **30.58** |\\n| SR+PR-Normed | **32.23** / 28.68 | **36.13** / 29.66 | 34.79 / 25.9 | 23.18 / 9.8 | 25.39 / 12.36 |\\n\\n\\nwhere \\\"a/b\\\" denotes training accuracy $a\\\\%$ and test greedy accuracy $b\\\\%$. Tested on MATH test set.\\n\\n\\n- To address your concern, we have provided the training curves of all algorithms in Appendix A.2 (Fig. 7 ~ Fig.11, also [available here](https://i.postimg.cc/jd1HPLQM/training-curves.jpg)).\\n\\n\\n## 6. Additional Ablation Study of the Reward Threshold $\\\\eta$\\n\\n> \\\"If we use PR-Clip itself, then it becomes an algorithm that prefers shorter generations as we increase $\\\\eta$. Do you have results on the performance on varying $\\\\eta$?\\\"\\n\\n- We thank the reviewer for requesting the ablation study of $\\\\eta$.\\n- In our experiments, by default we set $\\\\eta$ to be the average value of PRM rewards of all reasoning steps related to one question in a training batch. This choice could avoid explicitly tuning the optimal $\\\\eta$ for different PRMs.\\n- We conduct ablation study of the reward threshold $\\\\eta$ used in PR-Clip, as shown in Fig. 12 of Appendix A.2 and in the following table. 
Surprisingly, we find a fixed $\\\\eta$ brings more stable improvements and $\\\\eta=0.7$ obtains the best performance and can surpass RL training with SR.\\n\\n**Ablation study of $\\\\eta$:**\\n| | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | 29.26 | 29.72 | 29.86 | 30.16 | 30.58 |\\n| SR+PR-Clip ($\\\\eta=0.2$) | 29.68 | 30.4 | 30.5 | 30.28 | 30.78 |\\n| SR+PR-Clip ($\\\\eta=0.7$) | 29.4 | 30.46 | **30.44** | **30.84** | **30.98** | \\n| SR+PR-Clip ($\\\\eta=$ mean) | **29.84** | **30.56** | 29.62 | 30.62 | 30.3 | \\n\\nTested on MATH test set. '$\\\\eta=$ mean' means that we set $\\\\eta$ to be the average value of PRM rewards of all reasoning steps related to one question in a training batch.\\n\\n## 7. Discussion on Preference of Generation Length\\n> \\u201cWhile in this paper the Delta (giving agent no preference on length of generation) or the Clip (giving agent preference on shorter generation) worked, it depends on the task; on the tasks where longer generations are advantageous, both modifications might not be beneficial. In that sense, the paper only evaluates on single task, and it would not be a robust assessment on the algorithm's performance.\\u201d\\n- We emphasize that the Clip and the Delta mechanisms are proposed to tackle the issues of simply applying PRM as reward shaping as analyzed in Sec. 4. The significant increment of length is a consequence of one of the identified issues. Please refer to Sec. 3 for more details.\\n- We suggest combining both the Clip and the Delta mechanisms in practice, which mitigates issues from different aspects and would not explicitly introduce length biases. Regardless of the generation length, as long as PRM can evaluate the quality of reasoning steps accurately, RL training could improve the accuracy of the LLM. \\n- We also argue that there is no general preference for the generation length in mathematical reasoning tasks. 
The optimal generation length often depends on the problem at hand. While some problems may benefit from detailed, step-by-step solutions to ensure clarity and accuracy, others might be better solved with concise, high-level reasoning. As such, there is no universal preference for shorter or longer generations, as both can be equally valid depending on the problem's requirements. \\n\\nWe hope our response addresses your concerns, and we welcome any further questions or suggestions.\"}", "{\"summary\": \"The paper explores how to design effective RL reward models to improve LLMs in mathematical tasks.\\nIt evaluates two popular reward models - ORM and PRM - during RL training.\\nThe authors argue that using these reward models does not improve and can even degrade performance during RL training, due to \\\"reward hacking\\\".\\nThe authors propose two ad-hoc reward refinement techniques, Clipping and Delta, and demonstrate their performance improvement.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper has positioned itself well as an NLP paper by referring to a number of recent papers on LLMs and reward models.\", \"The solution suggested by the paper effectively improves on previous baselines.\"], \"weaknesses\": [\"After introducing ORM, the paper does nothing about it; since it is introduced and evaluated, we need at least an analysis of why it does not help.\", \"While the paper suspects the observed phenomenon is \\\"reward hacking\\\", I would say this is closer to the wrong usage of reward functions. As we have a binary correctness label, and ORM and PRM try to get an estimation of it from a partial generation, all rewards will be nonnegative. In episodic settings without any discount, this naturally leads to agents that favor long trajectories, i.e., long solution generation, regardless of whether it is correct or not.
The paper does not tell us about the hyperparameter choice of $\\\\alpha$, but with some large enough $\\\\alpha$, this may lead to the agent preferring long wrong generations over short correct generations.\", \"The proposed method is, in my perspective, not novel. To solve sparse reward problems, the first go-to should be reward-shaping methods. The proposed delta mechanism can be understood as a potential-based reward shaping method where PRM is the potential function. The proposed clip-delta mechanism can be understood as a potential-based reward shaping method where PRM-clip is the potential function. The overall paper can actually be understood as using a reward-shaping method to handle sparse rewards of RLHF, and I believe the paper should position itself well against reward-shaping methods that have been extensively studied in the past.\", \"The paper argues that PR-Normed approach overfits and the performance degradation is severe. To see SR+proposed method algorithms do not suffer from the same problem, we need more information, e.g., learning curves of all algorithms.\", \"While in this paper the Delta (giving agent no preference on length of generation) or the Clip (giving agent preference on shorter generation) worked, it depends on the task; on the tasks where longer generations are advantageous, both modifications might not be beneficial. In that sense, the paper only evaluates on single task, and it would not be a robust assessment on the algorithm's performance.\"], \"questions\": [\"Why does ORM not suffer from the same problem?\", \"What's the difference between PRM and ORM: why does PRM help when ORM does not?\", \"If we use PR-Clip itself, then it becomes an algorithm that prefers shorter generations as we increase $\\\\eta$.
Do you have results on the performance on varying $\\\\eta$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors (Part II)\", \"comment\": \"## 4. The Effect of the Clip and Delta Mechanisms\\n> \\u201cThe work also lacks direct examination of how these two methods work under the hood. For example, how often is the reward nonzero after clipping? i.e. Is this truly a dense reward? When delta is applied on top of clipping, the argument that the reward is bound no longer holds, because r_process(q, p_k+1) can be 0. So what's really the mechanism in this case?\\u201d\\n\\n- **Clip mechanism**: In our experiments, we set $\\\\eta$ to be the average value of PRM rewards of all reasoning steps related to one question in a training batch. After checking the training statistics, we find the ratio of nonzero clipped PRM rewards to be around $50\\\\%$, showing that the Clip mechanism **indeed provides dense rewards for RL training.** The clip ratio during training is provided in Appendix A.2.\\n- **PR-Clip-Delta**: Regarding PR-Clip-Delta, by Eq. 5, the Clip mechanism first produces a reward $r_{PR-Clip}(q,p^{(k)})=min(r_{process}(q,p^{(k)})-\\\\eta,0)\\\\in[-1,0]$ since $r_{process}(q,p^{(k)})\\\\in[0,1]$. For PR-Clip\\u2013Delta, by Eq. 6, the Delta mechanism generates $r_{PR-Clip-Delta}(q,p^{(k)})=r_{PR-Clip}(q,p^{(k)})-r_{PR-Clip}(q,p^{(k+1)})\\\\in[-1,1]$ for $k<K-1$, $r_{PR-Clip-Delta}(q,p^{(k)})=r_{PR-Clip}(q,p^{(k)})\\\\in[-1,0]$ for $k=K-1$, and $r_{PR-Clip-Delta}(q,p^{(K)})=0$. The return of PR-Clip-Delta is $\\\\alpha\\\\cdot r_{PR-Clip}(q,p^{(k)})+\\\\text{Correct}(q,s)$ for intermediate steps and **therefore both the return and the reward of PR-Clip-Delta are bounded**.\\n\\n## 5. 
Comparison between SR and SR+PR-Clip-Delta Across All Evaluated LLMs & The Improvement on Qwen2.5-Math-7B-Instruct\\n> \\u201cIn Table 2, the experimental results show pretty marginal (or, non-existent) improvements in greedy decoding, on the only valid RL baseline, Qwen2.5-Math-7B-Instruct. The other comparisons are pretty meaningless since we all know RL works & it's not the contribution of this paper.\\u201d\\n\\n- We clarify that our experiments present a comparison between SR and SR+PR-Clip-Delta across all evaluated LLMs in Fig. 5 of Sec. 5(also [available here](https://i.postimg.cc/qRTLfCDT/perf-improve.jpg)), where **using PR-Clip-Delta as dense rewards improves RL training with success rewards only across all evaluated LLMs**. We also apologize for the misleading effect caused by the absence of training results of PPO w. SR in the main table (Table 2). We have updated Table. 2 (also [available here](https://i.postimg.cc/v8xjwWZJ/main-table.jpg)) to explicitly include RL training results using success rewards for greater clarity. \\n- **Improvement on Qwen2.5-Math-7B-Instruct:** On Qwen2.5-Math-7B-Instruct, adding PR-Clip-Delta as dense rewards also enhances the **sampling accuracy** and the **Pass@16** accuracy over RL training with success reward only. This indicates that the model learns better reasoning skills. Below is the detailed performance comparison:\\n\\n\\n**Accuracy on MATH test set of SR vs. SR+PR-Clip-Delta on Qwen2.5-Math-7B-Instruct:**\\n| | Greedy | Sampling | Pass@16 |\\n| -------- | ------- | ------- | ------- |\\n| Qwen2.5-Math-7B-Instruct | 83.3 | 52.76 | 86.6 |\\n| +PPO w. SR | 83.16 | 79.95 | 92.46 |\\n| +PPO w. SR+PR-Clip-Delta | **83.38** | **81.22** | **92.60** | \\n\\n- Our main focus is to understand how to unlock the potential of PRM in RL training without additional data. Further improving the most competitive LLMs would require techniques beyond RL and reward design, such as developing stronger PRMs and using additional data. 
These directions are beyond the scope of our work and we will leave them as future work. We believe our work could still provide guidance on designing rewards when stronger PRMs or more advanced RL algorithms are available.\\n\\nWe hope our response addresses your concerns, and we welcome any further questions or suggestions.\"}", "{\"summary\": \"The paper examines the use of learned reward models, such as Outcome-supervised (ORM) and Process-supervised Reward Model (PRM), to enhance reasoning in LLMs during RL training. They show that ORM does not improve and PRM can even hinder RL training due to reward hacking. The authors propose \\u201cClip\\u201d and \\u201cDelta\\u201d to address this.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed idea is new, interesting, and well-motivated.\", \"The paper is easy to read and follow.\", \"The addressed problem is of significance.\"], \"weaknesses\": [\"More details on the experimental setup could be provided for reproducibility, including the reward thresholds for the Clip and Delta mechanisms and hyperparameters for PPO.\", \"Smaller LLMs may offer a larger scope of improvement, and so the proposed methods may seem to have been successful. 
However, to confirm the advantage, experiments on larger LLMs may be necessary; e.g., the paper reports GPT-4o-2024-08-06\\u2019s performance to be 92.9 on GSM8K, which is higher than almost all other models and variations, and offers less scope of improvement.\"], \"questions\": [\"Any discussion on the computational overhead or ease of integration of Clip and Delta into existing workflows would be beneficial.\", \"Although mathematical reasoning is a good testbed, any comments on the potential applicability of these techniques in non-mathematical or multi-modal reasoning tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors\", \"comment\": \"We thank the reviewer for the valuable feedback and the interest in our work. Herein, we address the points raised by the reviewer:\\n\\n\\n## 1. Experiment Setup for Reproducibility\\n> \\\"More details on the experimental setup could be provided for reproducibility, including the reward thresholds for the Clip and Delta mechanisms and hyperparameters for PPO.\\\"\\n- We have added the hyperparameters for PPO in Appendix D.\\n- In our experiments, we set $\\\\eta$ to be the average value of PRM rewards of all reasoning steps related to one question in a training batch. We implement the training pipeline based on ReaLHF [1] by implementing the success reward and the dense rewards provided by PRM. \\n\\n[1] Mei, Zhiyu, et al. \\\"ReaLHF: Optimized RLHF Training for Large Language Models through Parameter Reallocation.\\\" arXiv preprint arXiv:2406.14088 (2024).\\n\\n## 2. Experiments on Larger LLMs\\n> \\\"Smaller LLMs may offer a larger scope of improvement, and so the proposed methods may seem to have been successful. 
However, to confirm the advantage, experiments on larger LLMs may be necessary; e.g., the paper reports GPT-4o-2024-08-06's performance to be 92.9 on GSM8K, which is higher than almost all other models and variations, and offers less scope of improvement.\\\"\\n- **Our main focus is to investigate how to unleash the potential of PRM in RL training for LLM reasoning.** Our main experiments over a diverse set of LLMs in Sec. 4 present a comparison between RL training with success rewards only and RL training that combines PR-Clip-Delta and success rewards. We believe our evaluation presents a solid empirical justification for the effectiveness of the proposed methods.\\n- If a more powerful PRM or additional data is available, we believe the proposed approaches in our work could also benefit stronger and larger models. However, these directions are outside the scope of our work and we leave them as future work.\\n\\n## 3. Computational Overhead\\n> \\\"Any discussion on the computational overhead or ease of integration of Clip and Delta into existing workflows would be beneficial.\\\"\\n- Both the Clip and the Delta mechanisms are straightforward to integrate into existing workflows.\\n- The implementation of the Clip mechanism involves computing the mean of the reward as a threshold after reward calculation, followed by applying the formula specified in Eq. 5. This additional step is computationally lightweight and seamlessly fits within the existing reward processing pipeline.\\n- The Delta mechanism requires computing the difference between rewards from two adjacent steps, a process that is both conceptually simple and computationally efficient. As such, neither method introduces significant overhead, ensuring their ease of adoption.\\n- We have also added related discussion in Appendix D.\\n\\n## 4.
Extension to Other Tasks\\n> \\\"Although mathematical reasoning is a good testbed, any comments on the potential applicability of these techniques in non-mathematical or multi-modal reasoning tasks?\\\"\\n\\nWe thank the reviewer for raising the important question of broader applicability. While our work focuses on mathematical reasoning, the proposed techniques are not inherently limited to this domain. The approach can indeed be extended to other reasoning tasks, such as coding challenges or multi-modal reasoning, as long as two key conditions are met: (1) a task-specific partitioning of reasoning steps is feasible, and (2) a reliable success signal is available. We consider these applications promising directions for future exploration.\\n\\nWe hope our response addresses your concerns, and we welcome any further questions or suggestions.\"}", "{\"comment\": \"Thank you for your rebuttal.\\n\\nI still find the delta mechanism unmotivated. Specifically, I don't understand this sentence \\\"The Delta mechanism ensures the steps promoted by RL training are aligned with the PRM, which promotes the correct step in this case.\\\" The delta mechanism changes the rewards in such a way that the sum of the altered (after delta) rewards, i.e., the altered return, becomes equal to the last reward essentially (right?). I still don't get why that should be useful. \\n\\nThe clipping mechanism makes sense as you just don't want the PRM to offer much reward as it happens to be not really trustworthy. \\n\\nHowever, still the mixing of the two becomes confusing. From the training curves, I can see that it is only the clip+delta that achieves a higher test accuracy, which is the only accuracy that really matters (right?). Therefore, I don't see the delta or the clip improving the test accuracy alone. I am looking at epoch 5 because I don't know why we should look mid-training and not the final outcome. \\n\\nI thank the authors.
I think pointing the issue of PRMs is important and that is the main reason I gave the paper a 6. Unfortunately, I don't find the paper motivated enough to give an 8. If anything, a 6 is a bit on the upper side as well as I don't find the proposed solution to be generalizable or motivated and at the end the improvements are marginal.\"}", "{\"title\": \"Response by Authors\", \"comment\": \"We thank the reviewer for the valuable feedback. Here we respond to the points raised by the reviewer,\\n\\n## 1. Discussion on the Evaluation of Larger Models\\n> \\\"We saw in the empirical results that the improvement in the small models was higher than in the larger ones. I disagree with the authors about the importance of evaluating the study or the proposed techniques on larger models.\\\"\\n\\n- We believe that the evaluation of larger and stronger models can help us understand the limitations of our approach and identify directions to further improve the most competitive LLMs with RL training. Potential improvement directions include augmenting the training distribution and enhancing the quality of PRM. We also believe that our approach is applicable to stronger models when a better PRM is available. However, these directions are out of the scope of our work and we leave them as future work.\\n\\n## 2. Effect of ORM in RL Training\\n> \\\"As far as I understand, the outcome reward (OR) gives a likelihood that the solution is correct. That means a solution can have a success reward of zero but still be close to the correct solution; hence, the likelihood could be relatively high. It can also be high because of the uncertainty of the reward model. Why do you still think that did not help when combined with the success reward? I believe a probability of success in the form of reward, instead of ones and zeros, should contribute to the learning. 
If I understood the outcome reward wrong, then the explanation of the reward models was unclear.\\\"\\n- We would like to correct the inaccurate statement in the submission that ORM does not help RL training. In fact, introducing ORM improves the sample efficiency but does not significantly improve the final accuracy. The data is given in the following table. \\n- We agree with the reviewer that ORM can bring some benefits. Indeed, the training targets of ORM and the critic of PPO with success rewards in the last token are equivalent. Therefore, introducing ORM offers a better initialization for the last-token value. However, the benefit of ORM would diminish when sufficient RL training is conducted.\\n\\n\\n**Test Greedy accuracy of SR vs. SR+OR across training epochs:**\\n| | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | 29.26 | 29.72 | 29.86 | 30.16 | **30.58** |\\n| SR+OR | **29.32** | **30.04** | **30.08** | **30.48** | 30.57 |\\n\\n**Test accuracy of SR vs. SR+OR:**\\n| | Greedy | Sampling |\\n| -------- | ------- | ------- |\\n| SR | **30.58** | 27.05 |\\n| SR+OR | 30.57 | **27.12** |\\n\\nBoth are tested on the MATH test set.\"}", "{\"title\": \"Response by Authors (Part I)\", \"comment\": \"We sincerely appreciate the reviewer for providing valuable feedback. We hope the following response can address the concerns raised by the reviewer.\\n\\n## 1. On the Novelty of the Clip and Delta Mechanisms\\n> \\u201cThe proposed method is, in my perspective, not novel. To solve sparse reward problems, the first go-to should be reward-shaping methods. \\u201d\\n- We emphasize that, though reward shaping is a common approach in RL, it is challenging to design proper rewards to promote better reasoning skills, as shown by our in-depth analysis in Fig. 3 (also [available here](https://i.postimg.cc/Gp1Cq89Y/case-study.jpg)).\\n- We conduct additional case studies (Fig.
3) and theoretical analysis (Appendix E) on why the proposed methods can effectively utilize the PRM to enhance reasoning through RL training. Beyond their practical effectiveness, our findings provide valuable insights for the community on how to design effective RL rewards for reasoning tasks. \\n- **Please refer to the Global Response and the revised paper for more details.**\\n\\n## 2. Explanation of the Delta Mechanism and Connection to Potential-based Reward Shaping\\n> \\u201cThe proposed delta mechanism can be understood as a potential-based reward shaping method where PRM is the potential function. ... the paper should position itself well against reward-shaping methods that have been extensively studied in the past.\\\"\\n- It is insightful to point out the connection between the delta mechanism and potential-based reward shaping methods. However, we clarify that **the Delta mechanism is NOT a potential-based reward shaping (PBRS) method** because the Delta mechanism does not fit the mathematical form of PBRS.\\n- By the definition of potential-based reward shaping in [1], a potential-based shaping function takes the form $F(s,a,s')=\\\\gamma \\\\Phi(s')-\\\\Phi(s)$ for transition $s,a,s'$. However, the Delta mechanism uses $r(q,p^{(k)})=r_{process}(q,p^{(k)})-r_{process}(q,p^{(k+1)})$ and takes the form $F(s,a,s')=\\\\Phi(s')-\\\\Phi(s'')$ for transition $s,a,s',a',s''$ by defining $s=(q,p^{(k-1)}),s'=(q,p^{(k)}), s''=(q,p^{(k+1)}),\\\\Phi(s')=r_{process}(q,p^{(k)}),\\\\Phi(s'')=r_{process}(q,p^{(k+1)})$.\\n- To better interpret the Delta mechanism, we provide additional theoretical analysis in Appendix E.
We show the policy gradient of RL training with SR+PR-Delta can be given in a step-wise manner,\\n $$\\n \\\\nabla_\\\\theta J_{r}(\\\\pi_\\\\theta)=\\\\mathbb E_{q\\\\sim \\\\mathcal D,s\\\\sim \\\\pi_\\\\theta(\\\\cdot|q)}\\\\large[\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s|q)\\\\cdot \\\\text{Correct}(q,s)+ \\\\alpha\\\\cdot\\\\underbrace{\\\\sum_{k=1}^{K-1}\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s^{(k)}|q,p^{(k-1)})\\\\cdot r_{process}(q, p^{(k)})}_{\\\\text{Effect of the Delta mechanism}}\\\\large] +\\\\text{KL term}\\n $$\\n Thus, **the Delta mechanism enhances the single-step PRM rewards**. \\n- We have added the discussion on related reward shaping methods in Sec. 2.\\n\\n\\n[1] Ng, Andrew Y., Daishi Harada, and Stuart Russell. \\\"Policy invariance under reward transformations: Theory and application to reward shaping.\\\" ICML, Vol. 99, 1999.\"}", "{\"summary\": \"This work discusses the role of reward models in enhancing the reasoning capability of LLMs during RL training, which is under-explored compared to inference. It shows the impact of popular reward models, the Outcome-supervised Reward Model (ORM) and the Process-supervised Reward Model (PRM), on the performance of LLMs after being combined with the sparse success reward signals on math problems. It was observed that such reward models may not help or even hurt the performance of the LLM due to the reward hacking issue. This work proposes two reward refinement methods to tackle this issue, named Clipping and Delta. Those techniques have shown potential in stabilizing the RL training of a collection of LLMs when evaluated on the MATH and GSM8K benchmarks.
In addition, performance improvement across all evaluated LLMs can be obtained by carefully designing a reward function for pure RL training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written, and the problem is nicely motivated.\", \"The empirical results are enough to assess the potential of the reward model, with some techniques for mitigating reward hacking during RL training to enhance the LLM reasoning.\", \"I appreciate the case study shown in Fig. 2 and the others added in the appendix.\"], \"weaknesses\": \"## Major Comments:\\n- We saw in the empirical results that the improvement in the small models was higher than in the larger ones. I disagree with the authors about the importance of evaluating the study or the proposed techniques on larger models.\\n\\n## Minor Comments:\\n- Typo in Line 057: \\\"on the reward models, it remains **un**clear whether the reward models can provide additional training\\\".\", \"questions\": [\"As far as I understand, the outcome reward (OR) gives a likelihood that the solution is correct. That means a solution can have a success reward of zero but still be close to the correct solution; hence, the likelihood could be relatively high. It can also be high because of the uncertainty of the reward model. **Why do you still think that did not help when combined with the success reward?** I believe a probability of success in the form of reward, instead of ones and zeros, should contribute to the learning. If I understood the outcome reward wrong, then the explanation of the reward models was unclear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Acknowledgment\", \"comment\": \"Dear Authors,\\n\\nThank you for your response! 
I will retain my score while also keeping my confidence score.\"}", "{\"summary\": \"This paper introduces two methods to prevent reward hacking while training LLM with reinforcement learning algorithms, Clipping and Delta. The paper shows superior results when adding these techniques on top of process rewards, compared to baselines without these techniques.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is written clearly, and well-motivated. Extensive experiments are done, ablating different parts of the algorithm to show that clipping and delta mechanisms are actually helping.\", \"weaknesses\": \"Reward clipping is not a novel idea and has been studied before. Reward hacking, even more so. IMO, a valid baseline for this paper needs to be a well-written PRM. However, looking at the baselines used in this paper, whenever PR is used the result is worse than SR alone. Other papers have already shown PRM is better than ORM alone. Clearly, the baseline is too weak and isn't setup correctly.\\n\\nBasically, the authors are showing evidence that SR + PR + reward-hacking-prevention is better than SR alone, or SR + plain PR. I don't find this to be a new contribution.\\n\\nThe work also lacks direct examination of how these two methods work under the hood. For example, how often is the reward non-zero after clipping? i.e. Is this truly a dense reward? When delta is applied on top of clipping, the argument that the reward is bound no longer holds, because r_process(q, p_k+1) can be 0. So what's really the mechanism in this case?\\n\\nIn Table 2, the experimental results show pretty marginal (or, non-existent) improvements on greedy decoding, on the only valid RL baseline, Qwen2.5-Math-7B-Instruct. The other comparisons are pretty meaningless since we all know RL works & it's not the contribution of this paper.\", \"questions\": \"How often is the reward non-zero after clipping? i.e. 
Is this truly a dense reward?\\n\\nWhy do none of the PRM methods in the baselines work?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Dear Authors,\\n\\nThank you for the explanation of the delta mechanism. Here is my understanding: The PRM gives rewards for each step. The delta mechanism acts on top of the PRM rewards? So, it changes the rewards for the step to \\\"the change in the rewards of a step\\\". In your setting, if the solution is step1, step2, step3 and the PRM rewards are r1, r2, and r3, you are changing it to 0, r2-r1, r3-r2. That is what I am saying is unmotivated. The PRM itself should describe whether a step is good quality. But, you change it to whether the step was higher quality than the previous step. \\n\\nAs for my score, I will keep my score the same because I think 6 is on the higher end and the reason for the 6 is the importance of pointing out PRM shortcomings.\"}", "{\"title\": \"Motivation and Novelty of the Delta Mechanism\", \"comment\": \"To address the common questions regarding the Delta mechanism, we would like to clarify the motivation behind the Delta mechanism and its core novelty.\\n\\nThe key novelty of the Delta mechanism lies in its ability to **guide RL training to optimize the single-step PRM reward for each individual reasoning step**. By contrast, PR encourages longer generations, even when the PRM rewards of individual steps are sub-optimal, as shown in our case studies (Fig. 2 and Fig. 3).\\n\\nThe contrast becomes clearer through an analysis of the policy gradient for PR and PR-Delta.
Following [1], the policy gradient of RL training can be given in a step-wise manner,\\n $$\\n \\\\nabla_\\\\theta J_{r}(\\\\pi_\\\\theta)=\\\\mathbb E_{q\\\\sim \\\\mathcal D,s\\\\sim \\\\pi_\\\\theta(\\\\cdot|q)}\\\\large[\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s|q)\\\\cdot \\\\text{Correct}(q,s)+ \\\\alpha\\\\cdot\\\\sum_{k=1}^{K}\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta(s^{(k)}|q,p^{(k-1)})\\\\cdot \\\\text{Return}(q,p^{(k)})\\\\large] +\\\\text{KL term}\\n $$\\nwhere $\\\\text{Return}(q,p^{(k)})$ denotes the return starting from step $k$. \\n\\n\\n1. **RL Training with SR+PR.**\\n The return is given by,\\n $$\\n \\\\text{Return}(q,p^{(k)})=\\\\underbrace{\\\\sum_{k'\\\\ge k}r_{process}(q, p^{(k')})}_{\\\\text{Sum of PRM rewards}}\\n $$\\n\\n This formulation encourages producing a larger number of reasoning steps, even when the PRM rewards are low. For example, an incorrect solution with 10 steps and an average PRM reward of 0.3 could be preferred over a partially correct solution with only 3 steps and a higher average PRM reward of 0.8. The RL training thus prioritizes the longer solution with lower average rewards.\\n\\n2. **RL Training with SR+PR-Delta.**\\n Introducing the Delta mechanism, the return of PR-Delta starting from step $k$ is adjusted by,\\n $$\\n \\\\text{Return}(q,p^{(k)})=\\\\sum_{k'=k}^{K-2}\\\\left[r_{process}(q,p^{(k')})-r_{process}(q,p^{(k'+1)})\\\\right]+r_{process}(q,p^{(K-1)})=\\\\underbrace{r_{process}(q,p^{(k)})}_{\\\\text{PRM reward at step }k}\\n $$\\n Here, the training optimizes **the PRM reward for each intermediate step** rather than focusing on the aggregated reward. Since the PRM is trained to predict the quality of one step, this approach could enhance the reasoning process step by step.\\n\\n**For further details, please refer to the revised paper, particularly Section 4.**\\n\\n[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction.
MIT press, 2018.\"}", "{\"title\": \"Response by Authors\", \"comment\": \"We thank the reviewer for the valuable feedback and the interest in our work. Here we address the concerns raised by the reviewer:\\n\\n## 1. Limited Novelty\\n> \\u201cThe paper's contribution seems more incremental than innovative: It primarily applies known RL techniques to the specific context of LLM reasoning. The main finding that reward models can be exploited during training is somewhat expected given the general challenges of reward design in RL. The solutions proposed (Clipping and Delta mechanisms) are straightforward applications of existing RL principles.\\u201d\\n\\n- We emphasize that the Clip and the Delta mechanisms are introduced to effectively the PRM to promote better reasoning skills, instead of simple applications of existing techniques. \\n- We conduct additional case studies in Fig. 3 (also [available here](https://i.postimg.cc/Gp1Cq89Y/case-study.jpg)) and theoretical analysis in Appendix E to better illustrate the motivation and the effect of the proposed methods. \\n- Our in-depth analysis in Sec. 4 reveals why they are effective for PRM in RL training. Beyond their practical effectiveness, our findings provide valuable insights for the community on how to design effective RL rewards for reasoning tasks. \\n- **Please refer to the Global Response and the revised paper for more details.**\\n \\n\\n## 2. Missing Reward Shaping related work\\n\\nWe thank the reviewer for pointing out the missed related work on reward shaping. We have added a discussion about the related works in the revised version.\\n\\n\\nWe hope our response addresses your concerns, and we welcome any further questions or suggestions.\"}", "{\"metareview\": \"This paper presents a way to utilize process reward models (PRMs) for RL training in LLM reasoning. 
The paper studies some issues with PRM rewards, followed by a discussion of how adjusting process rewards via clipping and delta mechanisms can help improve performance. The paper presents interesting takeaways -- including positive biases of the PRM, reward hacking of the PRM, and shows performance with their suggested changes to PRM rewards (though some ablations are still not clear; see discussion with Reviewer 7bHH).\\n\\nWhile I enjoyed reading the paper, there're some weaknesses: (1) it is unclear if these findings hold true for any PRM, (2) in my opinion and in the opinion of some reviewers as well, the analysis section does not decouple issues with using process rewards vs errors in the process reward model itself, (3) connections to RL literature are not made explicit, and (4) the reviewers are not totally convinced by the ablations for the method proposed (although to the authors' credit, the proposed method is simple). \\n\\nUnfortunately while the paper has the potential to be quite impactful, due to the above issues we are not able to accept the paper at this moment.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised some valid points regarding deeply understanding the issues with PRMs and the efficacy of the approaches proposed by the authors. I believe that a lot of the points raised are valid, and while the authors did address several of these points, as I mention above, there are some points which are still not made explicitly clear.\\n\\nThe only reviewer to champion the paper provides a low confidence of 2, and I do agree with the points raised by other reviewers.\"}", "{\"summary\": \"This paper investigates how to effectively use reward models during reinforcement learning (RL) training to improve LLMs' mathematical reasoning abilities. 
The authors discover that traditional reward models can either be ineffective (in the case of Outcome-supervised Reward Models) or lead to reward hacking through unnecessary repetition of steps (in the case of Process-supervised Reward Models). To address these issues, they introduce two novel techniques - \\\"Clipping\\\" and \\\"Delta\\\" mechanisms - that help prevent reward exploitation while maintaining the benefits of process rewards. Using these refined reward techniques, they demonstrate consistent improvements across various LLMs, including enhancing the performance of state-of-the-art models like Qwen2.5-Math-7B-Instruct on the MATH and GSM8K benchmarks.\\n\\nThe key innovation is showing that while reward models are useful for inference-time improvements, they need careful refinement to be effective during RL training. Their solutions help stabilize training and prevent the model from gaming the reward system through repetitive or unnecessary steps.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strengths:\\n1. Identifying the issues with using rewards: The authors systematically analyze both Outcome-supervised Reward Models (ORM) and Process-supervised Reward Models (PRM), revealing important limitations of each approach. They demonstrate that ORMs, despite working well at inference time, don't provide additional benefits beyond success rewards during training. More significantly, they uncover a serious reward hacking issue with PRMs, where models learn to game the system by repeating simple or unnecessary steps to achieve high rewards. This analysis is particularly valuable because previous work primarily focused on using rewards at inference time, making this identification a novel contribution to the field.\\n2. 
Experiments: The authors conduct comprehensive evaluations across multiple model sizes (1.5B and 7B parameters) and variants (including Qwen2, Qwen2.5, and both general and math-specific models) on standard benchmarks like MATH and GSM8K. Their experimental design includes thorough ablation studies showing the impact of different components, careful comparisons of various reward mechanisms, and detailed analysis using synthetic examples to demonstrate reward hacking.", \"weaknesses\": [\"Limited Novelty:\", \"Clipping is indeed a well-established technique in the RL community, commonly used to stabilize training\", \"The concept of bounded rewards is a fundamental principle in RL, not a new innovation\", \"The Delta mechanism, while presented as novel, essentially implements reward shaping - another well-known concept in RL\", \"The paper's contribution seems more incremental than innovative: It primarily applies known RL techniques to the specific context of LLM reasoning. The main finding that reward models can be exploited during training is somewhat expected given the general challenges of reward design in RL. The solutions proposed (Clipping and Delta mechanisms) are straightforward applications of existing RL principles.\"], \"missing_reward_shaping_related_work\": \"1. Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping, Ng et al.\\n2. Hindsight credit assignment, Harutyunyan et al.\\n3. RUDDER: return decomposition for delayed rewards, Arjona-Medina et al.\\n4. Align-RUDDER: Learning from few demonstrations, Patil et al.\\n5. Modern Hopfield networks for return decomposition for delayed rewards, Widrich et al.\", \"questions\": \"My current criticism is regarding the novelty of the paper. 
I am willing to increase my score if the authors convince me that the clipping and delta are novel enough.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Clarification\", \"comment\": \"We really appreciate the reviewer's immediate feedback. We would like to explain the motivation of the Delta mechanism further.\\n\\n## Return of the Delta Mechanism\\n\\n> \\\"The delta mechanism changes the rewards in such a way that the sum of the altered (after delta) rewards, i.e. the altered return, becomes equal to the last reward essentially (right?). I still don't get why that should be useful.\\\"\\n\\nWe emphasize that, for any intermediate reasoning step $k$, **the Delta mechanism redefines the return of the altered reward to be equivalent to the PRM reward at step $k$, rather than at the last step**. \\n\\nThis can be better illustrated by analyzing the return of the Delta mechanism: for any $1\\\\le k\\\\le K-1$,\\n$$\\n\\\\text{Return}(q,p^{(k)})=\\\\left[\\\\sum_{k'=k}^{K-2}r_{process}(q,p^{(k')})-r_{process}(q,p^{(k'+1)})\\\\right]+r_{process}(q,p^{(K-1)})=\\\\underbrace{r_{process}(q,p^{(k)})}_\\\\text{PRM reward at step k instead of the last step}\\n$$\\n\\nTherefore, **RL training would optimize the single-step PRM reward for any intermediate step** rather than at the last step. Since the PRM is trained to predict the quality of one step, optimizing the PRM reward could enhance each reasoning step instead of the last step only.\\n\\nAnother interpretation is that, as we prove in Appendix F.2, PRMs (trained with automatically generated labels) are actually learning the value of a stronger policy. With the Delta mechanism, for any intermediate step, RL training optimizes the probability of correctly completing a solution by using a stronger policy. 
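To make the telescoping identity above concrete, here is a minimal Python sketch (hypothetical PRM reward values, undiscounted, and not our actual training code) checking that, under the Delta mechanism, the return from every step equals that step's own PRM reward:

```python
def delta_shaped(prm_rewards):
    # Delta mechanism (sketch): each step's shaped reward is r_k - r_{k+1};
    # the final step keeps its raw PRM reward.
    n = len(prm_rewards)
    shaped = [prm_rewards[k] - prm_rewards[k + 1] for k in range(n - 1)]
    return shaped + [prm_rewards[-1]]

def returns(step_rewards):
    # Undiscounted reward-to-go from each step.
    return [sum(step_rewards[k:]) for k in range(len(step_rewards))]

r = [0.9, 0.2, 0.6]  # hypothetical PRM rewards for a 3-step solution
print([round(g, 6) for g in returns(r)])                # plain PR:  [1.7, 0.8, 0.6]
print([round(g, 6) for g in returns(delta_shaped(r))])  # PR-Delta: [0.9, 0.2, 0.6]
```

With plain PR, the return from the first step grows with the number of steps, while with the Delta-shaped rewards the return from each step telescopes to that step's own PRM reward, which is exactly the quantity the policy gradient then optimizes.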
\\n\\n## Mixing the Clip and the Delta Mechanisms\\n\\nSince the Clip and the Delta mechanisms tackle issues from different aspects, as we analyzed in Sec 4, neither of them alone can tackle all issues. In practice, we observe that the Clip and the Delta mechanisms alone can each improve training, but the optimization process could be less stable, as shown in Fig. 10. Combining both approaches can tackle the issues we identified and achieve a stable improvement in RL training, as shown by the curve of PR-Clip-Delta in Fig. 10.\"}", "{\"title\": \"Response by Authors (Part I)\", \"comment\": \"We sincerely thank the reviewer for the detailed feedback to help us improve the quality of our work. We hope the following response can address the points raised by the reviewer.\\n\\n## 1. Position of Our Work \\n> \\u201cOther papers have already shown PRM is better than ORM alone.\\u201d\\n\\nWe emphasize that, to the best of our knowledge, though PRM is shown to be effective for test-time search for LLM reasoning, **it is still unclear whether ORM and PRM can bring additional benefits to RL training with success rewards.** Prior to our work, [1], [2], [3] and [4] apply ORM/PRM as reward shaping for RL training, but the actual effects of ORM and PRM are unclear due to a lack of sufficient ablation study. **Our work also presents the first thorough study on the issues of simply adopting PRM as reward shaping in RL training.**\\n\\n[1] Shao, Zhihong, et al. \\\"Deepseekmath: Pushing the limits of mathematical reasoning in open language models.\\\" arXiv preprint arXiv:2402.03300 (2024).\\n\\n[2] Yang, An, et al. \\\"Qwen2.5-math technical report: Toward mathematical expert model via self-improvement.\\\" arXiv preprint arXiv:2409.12122 (2024).\\n\\n[3] Wang, Peiyi, et al. \\\"Math-shepherd: Verify and reinforce llms step-by-step without human annotations.\\\" arXiv preprint arXiv:2312.08935 (2023).\\n\\n[4] Havrilla, Alex, et al. 
\\\"Teaching large language models to reason with reinforcement learning.\\\" ICML 2024 Workshop AI4MATH.\\n\\n## 2. On the Novelty of Our Work\\n> \\u201cBasically, the authors are showing evidence that SR + PR + reward-hacking-prevention is better than SR alone, or SR + plain PR. I don't find this to be a new contribution.\\u201d \\u201cReward clipping is not a novel idea and has been studied before. Reward hacking, even more so.\\u201d\\n\\n- We emphasize that, besides preventing training collapse, the mechanisms can effectively utilize the PRM to promote better reasoning skills in RL training.\\n- We conduct additional case studies in Sec. 4 and theoretical analysis in Appendix E to better illustrate the motivation and the effect of the proposed methods.\\n- Although the proposed methods are simple, our in-depth analysis in Sec. 4 reveals why they are effective for PRM in RL training. Our analysis presents valuable insights to the community on how to design effective RL rewards for LLM reasoning.\\n- **Please refer to the Global Response and the revised paper for more details.**\\n\\n## 3. Performance of PRM-based Baselines\\n> \\u201cHowever, looking at the baselines used in this paper, whenever PR is used the result is worse than SR alone. \\u2026. Clearly, the baseline is too weak and isn't setup correctly.\\u201d \\u201cWhy does none of the PRM method in baselines work?\\u201d\\n\\nWe would like to clarify that baselines are set up correctly. PRM-based baselines can improve RL training in training accuracy or test accuracy but can only achieve sub-optimal final performance. Training curves of all methods are provided in Appendix A.2.\\n\\nHere we discuss the specific performance of baselines.\\n- **PR-Normed**: During early training epochs, PR-Normed shows a sign of overfitting and only achieves sub-optimal test accuracy compared with SR, as shown in the following table. 
RL training of SR+PR-Normed also suffers from significant performance degradation after Epoch 3.\\n\\n**Train/Test accuracy of SR vs. SR+PR-Normed across training epochs:**\\n| train acc./ test acc. | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | 30.54 / **29.26** | 34.29 / **29.72** | **35.16** / **29.86** | **38.52** / **30.16** | **38.07** / **30.58** |\\n| SR+PR-Normed | **32.23** / 28.68 | **36.13** / 29.66 | 34.79 / 25.9 | 23.18 / 9.8 | 25.39 / 12.36 |\\n\\nwhere \\\"a/b\\\" denotes training accuracy $a\\\\%$ and test greedy accuracy $b\\\\%$. Tested on MATH test set.\\n\\n\\n- For **SR+PR**, we conduct an ablation study on the reward shaping coefficient $\\\\alpha$. When $\\\\alpha$ is small, SR+PR only achieves sub-optimal final accuracy. As we increase $\\\\alpha$, training collapse happens, and the test accuracy gets worse. The results are shown in the following table. \\n- We also note that, regardless of the value of $\\\\alpha$, PR has some fundamental issues that may lead the LLM to learn undesired behavior patterns, as pointed out in our additional case studies in Fig. 3 of Sec. 4 (also [available here](https://i.postimg.cc/Gp1Cq89Y/case-study.jpg)).\\n\\n**Test accuracy of SR vs. SR+PR with different reward shaping coefficient $\\\\alpha$ across training epochs:**\\n| | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | **29.26** | 29.72 | 29.86 | 30.16 | **30.58** |\\n| SR+PR ($\\\\alpha=0.02$) | 29.25 | **30.00** | **29.88** | **30.22** | 30.08|\\n| SR+PR ($\\\\alpha=0.05$) | 21.9 | 18.92 | / | / | / | \\n| SR+PR ($\\\\alpha=0.1$) | 14.10 | / | / | / | / | \\n| SR+PR ($\\\\alpha=0.2$) | 11.16 | / | / | / | / | \\n\\nwhere we stop training after training collapse is observed. 
Tested on MATH test set.\"}", "{\"summary\": \"This work has two important messages for the field of LLM Reasoning: 1) the reward models, especially the PRM, can be hackable. Therefore, integrating them into LLM RL training may lead to hacking this reward, performing worse than just removing it. 2) The paper argues that by clipping and then delta-ing the PRM rewards one can actually use these reward models to gain a boost in their performance.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I think the paper has important messages for the LLM reasoning community:\\n\\n1) The message that PRMs are hackable is important and valuable. Also, the paper digs into showing what goes wrong, which provides insight into what actually happens when using them: the LLM trained with these PRMs leans towards some steps that are correct but do not move us closer to a solution. I think this contribution is also important. \\n\\n2) Also, the paper shows limiting these rewards is not obvious. It is only by mixing the clipping and the delta method that they can boost the performance. This is interesting in that it shows the problem is serious, although I don't agree that the clip-delta completely solved it.\", \"weaknesses\": \"I think the paper focuses a lot on the boost it gets from mixing the clip and delta. However, I have some concerns about whether the clip and delta is a generalizable approach. First, the delta mechanism seems unmotivated. There is nothing wrong with being unmotivated if it works super well. But, I think the gains are modest. The delta mechanism rewards action `a_t` if the reward of action `a_{t+1}` is less, which is a very strong change to the RL environment. I understand that some interesting properties like the summation and small change to the returns arise from this, but I believe this is still an unmotivated change to the environment. However, maybe I don\\u2019t have the correct intuition on this. 
It is quite surprising that none of them work well on their own which makes me wonder if there is any intuition behind why this mixture of both works better? I am just afraid that this is not that well motivated.\", \"questions\": \"1- What is the motivation behind the delta mechansim? Why should we assign credit to a_t if the reward of a_{t+1} is lower?\\n\\n2- Can we consider the gains by mixing the clip and delta mechanism are strong and not modest? \\n\\n3- Your PRM is trained on automatic data. Is this possible that this hackability issue is because of the noise caused by the automatic procedure and would be avoided when trained on human-data or better PRM data generation (I understand better PRM data generation is itself a research question)?\\n\\n4- Is there any intuition on why both clip-and-delta should be applied to get a boost and none of them work well in isolation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response by Authors (Part II)\", \"comment\": \"## 3. Analysis of PR \\\\& Reward Shaping Coefficient for SR+PR\\n> \\u201cWhile the paper is suspecting the observed phenomenon as \\\"reward hacking\\\", I would say this is closer to the wrong usage of reward functions.\\u201d\\n- We agree with the reviewer that \\u201creward hacking\\u201d is an inaccurate description. We have revised the description of the phenomenon in SR+PR training as \\u201ctraining collapse\\u201d.\\n- We also agree that simply using PRM as dense rewards (PR) could be a wrong usage of the rewards. In our additional case studies, as shown in Fig. 3 (also [available here](https://i.postimg.cc/Gp1Cq89Y/case-study.jpg)), we identify the reward misspecification issue of PR. 
The proposed Delta mechanism can ensure the steps promoted by RL training are aligned with the PRM by optimizing single-step PRM rewards.\\n\\n> \\\"The paper does not tell us about the hyperparameter choice of $\\\\alpha$, but with some large enough $\\\\alpha$, this may lead to agent preferring long wrong generations over short correct generations.\\\"\\n\\n- For SR+PR, we conduct an ablation study on the reward shaping coefficient $\\\\alpha$. When $\\\\alpha$ is small, though the test accuracy of SR+PR can be marginally higher than or on par with SR in early training epochs, SR+PR only achieves sub-optimal final accuracy. As we increase $\\\\alpha$, training collapse happens, and the test accuracy gets worse. The results are given in Appendix A.3 and also shown in the following table. \\n\\n**Test accuracy of SR vs. SR+PR with different reward shaping coefficient $\\\\alpha$ across training epochs:**\\n| | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | **29.26** | 29.72 | 29.86 | 30.16 | **30.58** |\\n| SR+PR ($\\\\alpha=0.02$) | 29.25 | **30.00** | **29.88** | **30.22** | 30.08|\\n| SR+PR ($\\\\alpha=0.05$) | 21.9 | 18.92 | / | / | / | \\n| SR+PR ($\\\\alpha=0.1$) | 14.10 | / | / | / | / | \\n| SR+PR ($\\\\alpha=0.2$) | 11.16 | / | / | / | / | \\n\\nwhere we stop training after training collapse is observed. Tested on MATH test set.\\n\\n- Regardless of the value of $\\\\alpha$, PR has some fundamental issues, as pointed out in our case studies in Fig. 3. Therefore, simply using PRM rewards as dense rewards could not reliably help enhance the reasoning skills of the LLM.\\n\\n\\n## 4. 
Comparison Between ORM and PRM\\n\\n> \\\"After introducing ORM, the paper does nothing about it; since it is introduced and evaluated, we need at least an analysis on why it does not help.\\\"\\n- We would like to clarify the inaccurate statement in the submission version that ORM does not help RL training. In fact, introducing ORM improves the sample efficiency but does not significantly improve the final accuracy. \\n- We hypothesize this is because the training targets of ORM and the critic of PPO with success rewards in the last token are equivalent. Therefore, introducing ORM offers a better initialization for the last-token value. However, the benefit of ORM would diminish when sufficient RL training is conducted.\\n\\n**Test Greedy accuracy of SR vs. SR+OR across training epochs:**\\n| | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |\\n| -------- | ------- | ------- | ------- | ------- | ------- |\\n| SR | 29.26 | 29.72 | 29.86 | 30.16 | **30.58** |\\n| SR+OR | **29.32** | **30.04** | **30.08** | **30.48** | 30.57 |\\n\\n**Test accuracy of SR vs. SR+OR:**\\n| | Greedy | Sampling |\\n| -------- | ------- | ------- |\\n| SR | **30.58** | 27.05 |\\n| SR+OR | 30.57 | **27.12** |\\n\\nBoth are tested on the MATH test set.\\n\\n- We have updated the analysis of ORM in the revised version.\\n\\n> \\\"What's the difference between PRM and ORM\\\"\", \"there_are_two_main_differences_between_prm_and_orm\": [\"Training approach: PRM is trained to predict the correctness of a step. ORM is trained to predict the probability of the final correctness.\", \"Density of reward signals: OR uses ORM to provide sparse rewards that only occur at the end of the solution. PRM provides step-level dense rewards in PR.\", \"> \\\"Why does ORM not suffer from the same problem?\\\"\", \"The sparse rewards provided by ORM in the range of [0,1] naturally ensure a bounded RL objective. 
Therefore, training stability can be easily achieved.\", \"> \\\"Why PRM does help when ORM does not help?\\\"\", \"PRM provides dense training signals at the reasoning step level rather than just at the solution level. This enables better credit assignment during training, helping the LLM to correct reasoning errors more effectively.\", \"The effect of PRM can be better illustrated in the right case of Fig. 3. In this case, when the final answer of a solution is incorrect and the success reward is zero, PRM provides guidance on which steps are sub-optimal and should be avoided.\"]}", "{\"title\": \"Global Response\", \"comment\": \"We sincerely thank all reviewers for their insightful and constructive feedback. In response, we have substantially revised and enhanced our paper to address the concerns raised and to further strengthen our contributions. Below, we outline the key updates and clarifications:\\n\\n## 1. New Case Studies and Theoretical Analysis\", \"we_highlight_key_aspects_of_our_revision_as_follows\": [\"**New Case Studies on Issues of PR (Fig. 3)**: We provide new in-depth case studies in Fig. 3 (also [available here](https://i.postimg.cc/Nj894gSL/case-study.jpg)) showing issues when the PRM simply serves as reward shaping in addition to the success reward. 
Specifically:\", \"**Intrinsic biases of PRM** can be largely exploited by the training LLM to generate sub-optimal behavior patterns.\", \"**Reward Misspecification Issue** could mistakenly promote incorrect steps through RL training.\", \"**Additional Mechanism Analysis (Sec 4.2)**: Our detailed analysis highlights how the proposed mechanisms address specific issues:\", \"**Clip mechanism mitigates the intrinsic biases of PRM** by bounding the rewards to an upper threshold, preventing the LLM from obtaining high rewards through undesired patterns.\", \"**The Delta mechanism tackles the reward misspecification issue** by optimizing single-step PRM rewards, ensuring the steps promoted by RL training are aligned with the PRM.\", \"**Theoretical Insights for the Delta Mechanism (Appendix E)**: We provide theoretical analysis showing how the Delta mechanism leads RL training to optimize single-step PRM rewards alongside success rewards.\", \"**Additional Experiments & Training Curves (Appendix A)**: Additional experiments and training curves help better understand the limitations of baseline methods.\", \"## 2. Novelty of Our Work\"], \"we_highlight_the_novelty_of_our_work_as_follows\": [\"**The First Study of PRM Issues in RL Training for LLM Reasoning:** Our work presents the first thorough study of the issues when simply adopting PRM as reward shaping in RL training for LLM reasoning. We find the intrinsic biases of PRM and the reward misspecification issue may lead to sub-optimal performance. The impact of these issues on RL training for LLM reasoning tasks has not been systematically analyzed in prior literature [1,2,3,4].\", \"**Simple Techniques and Practical Insights for Effective RL Training:** Although the proposed mechanisms are simple in design, they are supported by thorough analysis and experiments that show their ability to utilize PRM effectively, providing valuable rewards in RL training for LLM reasoning. 
Our studies and analysis present valuable insights to the community on how to design effective RL rewards for LLM reasoning.\", \"## 3. Summary of Major Revisions\", \"**Fig.3**: Updated with new case studies and insights into mechanism effects.\", \"**Sec 4.1**: Discussion on the training results of PR \\\\& OR. Additional case studies on the issues of PR.\", \"**Sec 4.2**: The motivation and detailed explanation of the Clip and the Delta mechanism.\", \"**Sec 5**: Updated PPO training results with success rewards only in Table 2.\", \"**Appendix A**: Training curve and additional experiments.\", \"**Appendix D**: Hyperparameters and training setup.\", \"**Appendix E**: Theoretical analysis of the Delta mechanism and PRM.\", \"[1] Shao, Zhihong, et al. \\\"Deepseekmath: Pushing the limits of mathematical reasoning in open language models.\\\" arXiv preprint arXiv:2402.03300 (2024).\", \"[2] Yang, An, et al. \\\"Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement.\\\" arXiv preprint arXiv:2409.12122 (2024).\", \"[3] Wang, Peiyi, et al. \\\"Math-shepherd: Verify and reinforce llms step-by-step without human annotations.\\\" arXiv preprint arXiv:2312.08935 (2023).\", \"[4] Havrilla, Alex, et al. \\\"Teaching large language models to reason with reinforcement learning.\\\" ICML 2024 Workshop AI4MATH.\"]}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback and for your understanding of the delta mechanism.\\n\\nWe emphasize that **an RL algorithm** promotes steps with **high returns** instead of **high step-wise rewards**. Here the return refers to the rewards accumulated over the course of the solution.\\n\\nThe Delta mechanism indeed operates the rewards. It calibrates how the RL algorithm processes the rewards. In our example shown in Fig. 3, although a correct step has a high PRM reward, its return is much lower due to a shorter solution. 
Therefore, the RL algorithm might erroneously favor the incorrect step with a low PRM reward. With the Delta mechanism, we correct the **returns** so that they are **consistent with the PRM rewards**.\\n\\nPlease refer to Fig. 3 and Appendix E.1 in the revised paper for more details.\\n\\nWe hope this explanation clarifies the motivation behind the delta mechanism.\"}" ] }
F07ic7huE3
Bisimulation Metric for Model Predictive Control
[ "Yutaka Shimizu", "Masayoshi Tomizuka" ]
Model-based reinforcement learning (MBRL) has shown promise for improving sample efficiency and decision-making in complex environments. However, existing methods face challenges in training stability, robustness to noise, and computational efficiency. In this paper, we propose Bisimulation Metric for Model Predictive Control (BS-MPC), a novel approach that incorporates bisimulation metric loss in its objective function to directly optimize the encoder. This optimization enables the learned encoder to extract intrinsic information from the original state space while discarding irrelevant details. BS-MPC improves training stability, robustness against input noise, and computational efficiency by reducing training time. We evaluate BS-MPC on both continuous control and image-based tasks from the DeepMind Control Suite, demonstrating superior performance and robustness compared to state-of-the-art baseline methods.
[ "Reinforcement Learning", "Model-based reinforcement learning", "optimal control", "MPC" ]
Accept (Poster)
https://openreview.net/pdf?id=F07ic7huE3
https://openreview.net/forum?id=F07ic7huE3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x4OIKV1V1Z", "vgOj8LY2LO", "nbVvUk09Le", "kJeGfAmp1X", "ggQrsRBRSf", "g9AnAlNQ9h", "dhHkF4fRRx", "XecmKwFcdw", "XTf7DYIEeL", "UjPQI3xKw7", "R60L5ZvSUz", "Q1JtR69e9j", "N3v9RuT4mp", "GYc3LRf9Wm", "Fd4C1peENM", "EqcNJO1YdH", "DKHtJJ0vgp", "BIoQUgELPD", "AY7EKNAGre", "8DQhe87NAO", "4LUD3mTwfO", "21oVCAFWxF", "0Upo5zTd4F" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1731812292327, 1732082913275, 1732221591618, 1732083514335, 1730907684300, 1732221256699, 1731809524284, 1732202811011, 1731805408643, 1732221617745, 1732635574007, 1737523493964, 1730220324964, 1730692433735, 1732527982159, 1732624017327, 1731803460351, 1732527846154, 1731804311731, 1731808521550, 1734740857355, 1730409484773, 1732221639227 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_w1co" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_seSt" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_seSt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_seSt" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_CXNw" ], [ 
"ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ], [ "ICLR.cc/2025/Conference/Submission2251/Area_Chair_npgw" ], [ "ICLR.cc/2025/Conference/Submission2251/Reviewer_hjrQ" ], [ "ICLR.cc/2025/Conference/Submission2251/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": \"First and foremost, I would like to sincerely thank all reviewers for their insightful comments and constructive feedback. They helped me critically analyze and enhance my approach. I have replied to each reviewer about their questions and concerns. Here I would like to summarize the changes I made in my manuscript and respond to common questions and concerns raised by multiple reviewers.\\n\\n# Summary of Paper Revisions\\n- A new subsection has been added to Appendix A, explaining why BS-MPC outperforms TD-MPC, particularly in noisy environments (as suggested by [seSt](https://openreview.net/forum?id=F07ic7huE3&noteId=N3v9RuT4mp)).\\n- A new subsection has been added to Appendix C (C.2), detailing the computational structure differences between BS-MPC and TD-MPC using pseudo-code. It explains in detail why BS-MPC has a faster computational time than TD-MPC.\\n- Sentences highlighted by [hjrQ](https://openreview.net/forum?id=F07ic7huE3&noteId=21oVCAFWxF) have been corrected and revised to improve clarity and precision.\\n\\n# Responses to Common Questions\\nQ1. BS-MPC appears sensitive to hyperparameters. Is there a way to mitigate this issue?\\n\\nA1. Indeed, tuning the new weight term $c_4$ is essential for achieving optimal performance. 
In our experiments, we employed grid search for this hyperparameter, which involved six candidates ($10^{-8}$, $0.0001$, $0.001$, $0.01$, $0.1$, $0.5$), making the process computationally manageable. Our findings indicate that smaller values are suitable for high-dimensional and complex tasks, while larger values work better for simpler tasks. Additionally, we believe incorporating techniques such as reward discretization and normalization layers from TD-MPC2 could simplify or even eliminate the need for this tuning in the future.\\n\\n---\\n\\nQ2. The contribution appears incremental, as BS-MPC mainly integrates the bisimulation metric into TD-MPC.\\n\\nA2. We agree that BS-MPC builds upon TD-MPC, and inherits many components from TD-MPC. However, we believe that incremental yet meaningful advancements are essential to driving progress in reinforcement learning and beyond. Many significant contributions in the field, such as TD-MPC2, Dreamer, and Rainbow, have advanced the state of the art by refining and extending existing methods. In this context, we view BS-MPC as a step forward, offering two key contributions:\\n\\n- Integration of the Bisimulation Metric: While the bisimulation metric has been utilized in some model-based reinforcement learning approaches, this is the first work to incorporate it into a planning-based method. This simple addition provides a novel perspective on how to improve planning robustness and performance. BS-MPC is also the first paper that provides theoretical support for the estimated value.\\n\\n- Modification of the Computational Flow: BS-MPC revisits and modifies the computational flow used in TD-MPC for calculating costs. 
Although the values being computed are similar, our changes significantly improve computational efficiency and stabilize the training process compared to TD-MPC.\\n\\nWhile these changes may appear incremental, we believe that minimal modifications to existing simple algorithms can play a crucial role in advancing the field and addressing important challenges within the community. Maintaining the simplicity of TD-MPC while addressing three of its open problems demonstrates the importance of targeted refinements to simple yet powerful algorithms. We hope this contribution will inspire further exploration in the community.\\n\\n---\\n\\nQ3. Are there additional computational costs associated with the bisimulation metric loss, especially in high-dimensional latent spaces?\\n\\nA3. We estimate the calculation cost in Equation (8). Let d denote the dimension of the latent space; the computational cost of the Gaussian Wasserstein distance is O(d^3). Therefore, the computational cost of the bisimulation metric for a batch of size B becomes O(B d^3). This is the additional computational cost at each time step compared with TD-MPC. However, since BS-MPC can employ parallel computations, BS-MPC still achieves a faster computation time even with this additional computational cost. We have added Appendix C.2 to our paper with more details on the parallel computation.\"}", "{\"title\": \"Author response to reviewer seSt Part3\", \"comment\": \"We have conducted the additional experiments for TD-MPC2 and updated the PDF with the new results. 
Following the reviewer\\u2019s suggestion, we have moved Appendix D to Section 5 to better highlight the experimental results, specifically by including the TD-MPC2 results in the main body of the paper.\"}", "{\"title\": \"Friendly reminder\", \"comment\": \"As the deadline for the rebuttal is in one week, we would like to kindly remind the reviewer to review our response to your comments.\\n\\nIf there are any clarifications or additional details we can provide to assist with your review, please do not hesitate to let us know.\\n\\nThank you for your time and valuable feedback.\"}", "{\"title\": \"Paper Update\", \"comment\": \"As suggested by the reviewer [seSt](https://openreview.net/forum?id=F07ic7huE3&noteId=N3v9RuT4mp), we have conducted the additional experiments for TD-MPC2 and updated the PDF with the new results. In the updated PDF, we have moved Appendix D to Section 5 to better highlight the experimental results, specifically by including the TD-MPC2 results in the main body of the paper.\\n\\nWe have finished responding to all of the comments we got from the reviewers, and we are looking forward to having discussions with each reviewer.\"}", "{\"summary\": \"This paper presents a new method for model-based reinforcement learning (MBRL) called BS-MPC. The key innovation lies in incorporating a bisimulation metric loss into the objective function to improve encoder stability, robustness to noise, and computational efficiency. By using the bisimulation metric, BS-MPC aims to ensure behavioral equivalence in the latent space, maintaining key characteristics of the original state space. 
The method is benchmarked against the Temporal Difference Model Predictive Control (TD-MPC) and other model-free and model-based methods on various tasks, showing superior stability and resilience to noise.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a new perspective by integrating the bisimulation metric to address known challenges in MBRL, particularly around stability and robustness to noise. The experimental results demonstrate how BS-MPC performs well in both state-based and image-based tasks, showing increased resilience to noise and achieving faster training times due to parallel computation. The theoretical analysis adds depth by bounding cumulative rewards in the learned latent space, suggesting that BS-MPC retains meaningful state information effectively.\", \"weaknesses\": \"While the theoretical foundations are thorough, certain explanations, particularly on encoder stability and noise resilience, could be made clearer to broaden accessibility. The parameters require extensive tuning, which may be impractical for real-world applications lacking automated parameter selection. Additionally, the approach to introducing perturbations, particularly with visual distractions, doesn\\u2019t seem entirely effective. 
It would be beneficial to test perturbations that are more representative of realistic environmental changes, which could better showcase BS-MPC\\u2019s resilience.", "questions": "Could the authors expand on the sensitivity of BS-MPC to the parameter c4 and potential ways to reduce this dependency?\\n\\nHow does BS-MPC perform in scenarios with dynamic backgrounds that align with the movement instead of pure noise?\\n\\nAre there additional computational costs associated with the bisimulation metric loss, especially in high-dimensional latent spaces?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "3", "code_of_conduct": "Yes"}", "{\"title\": \"Author response to reviewer seSt\", \"comment\": \"Thank you for the comment and suggestion. We have updated our PDF to change the colors in the graphs and clarify the diagram. In addition, we are continuing to work on the theory to see if we can provide further theorems and statements. We will send the reviewers our new findings as soon as we have a new update.\"}", "{\"title\": \"Author response to reviewer seSt Part2\", \"comment\": \"Q4- I suggest a more rigorous approach for robustness, such as comparing the Lipschitz constant of your controller to TD-MPC's.\\n\\nA4. We explored the possibility of computing the Lipschitz constant for both our approach and TD-MPC. However, due to the complexity introduced by the use of neural networks in both methods, deriving a rigorous mathematical analysis proved to be challenging.\\n\\nTo address this limitation, we have added an intuitive explanation in Appendix A.2 to provide insight into why BS-MPC demonstrates improved robustness compared to TD-MPC. This discussion complements the response provided in Q5 and offers a qualitative understanding of the underlying factors.\\n\\n---\\n\\nQ5. 
I believe that if you can theoretically confirm in which case studies your method is going to perform better than TD-MPC, it will strengthen your paper significantly.\\n\\nA5. While we have not yet established a rigorous theoretical framework to precisely predict such cases, we have included an additional analysis in Appendix A.2. This section explores the factors that contribute to BS-MPC's improved performance in certain scenarios.\\n\\nWe kindly invite the reviewer to review this new addition. In summary, BS-MPC demonstrates superior performance when TD-MPC struggles to train its encoder to effectively capture the essential information in the original state $s$. This provides a partial explanation for BS-MPC's higher scores in noisy environments (Figure 5), where the TD-MPC encoder is more likely to fail to learn an accurate representation.\\n\\n---\\n\\nQ6. As the authors have mentioned, the hyperparameters play a huge role, and one wonders how much time is needed to tune these parameters.\\n\\nA6. In our experiments, we use grid search for hyperparameter optimization. Since there are only 6 candidates, it does not take much time to tune this parameter. From our experimental results, we also find that smaller values are suitable for more complex environments (e.g., dogs and humanoids), and this insight helps us narrow down the search range.\\n\\n---\\n\\nQ7. Contribution weakness\\n\\nA7. We wrote our thoughts in the [general response](https://openreview.net/forum?id=F07ic7huE3&noteId=x4OIKV1V1Z).\"}", "{\"comment\": \"I appreciate your response and your dedication in including a new method to compare. However, I believe this work to be too heuristic and incremental, and, as I stated, a formal theorem on your findings would have strengthened your paper. I will maintain my rating according to the paper's contributions and comments.\\n\\nMinor: I think your new figures are confusing; I suggest using the same color for each method throughout the paper. 
TD-MPC2 is pink in Figure 4, yet it is purple in Figure 3.\"}", "{\"title\": \"Author response to reviewer w1co\", \"comment\": \"We are grateful for the reviewer\\u2019s constructive suggestions and provide our responses below.\\n\\nQ1. Could the authors expand on the sensitivity of BS-MPC to the parameter c4 and potential ways to reduce this dependency?\\n\\nA1. Thank you for pointing this out. We also think this tuning term is the biggest limitation of the proposed method. In our experiments, we employed grid search for this hyperparameter, which involved six candidates ($10^{-8}$, $0.0001$, $0.001$, $0.01$, $0.1$, $0.5$), making the process computationally manageable. Our findings indicate that smaller values are suitable for high-dimensional and complex tasks, while larger values work better for simpler tasks. Additionally, we believe incorporating techniques such as reward discretization and normalization layers from TD-MPC2 could simplify or even eliminate the need for this tuning in the future.\\n\\n---\\n\\nQ2. How does BS-MPC perform in scenarios with dynamic backgrounds that align with the movement instead of pure noise?\\n\\nA2. Regrettably, we have not yet run an experiment in an environment where the dynamic background aligns with the movement. We are now setting up the Carla simulator (a vehicle simulator) to test the performance of the proposed method. We will update the paper as soon as we have the results. However, we believe that BS-MPC is likely to achieve better performance than other methods, because the deep bisimulation metric has been shown to perform well in the Carla simulator (dynamic backgrounds that align with the movement) [1].\\n\\n---\\n\\nQ3. Are there additional computational costs associated with the bisimulation metric loss, especially in high-dimensional latent spaces?\\n\\nA3. We estimate the calculation cost in Equation (8). 
Let d denote the dimension of the latent space; the computational cost of the Gaussian Wasserstein distance is O(d^3). Therefore, the computational cost of the bisimulation metric for a batch of size B becomes O(B d^3). This is the additional computational cost at each time step compared with TD-MPC. However, since BS-MPC can employ parallel computations, BS-MPC still achieves a faster computation time even with this additional computational cost. We have added Appendix C.2 to our paper with more details on the parallel computation.\\n\\n[1]. Zhang, A., McAllister, R., Calandra, R., Gal, Y., & Levine, S. (2021). Learning Invariant Representations for Reinforcement Learning without Reconstruction. https://arxiv.org/abs/2006.10742\"}", "{\"title\": \"Friendly reminder\", \"comment\": \"As the deadline for the rebuttal is in one week, we would like to kindly remind the reviewer to review our response to your comments.\\n\\nIf there are any clarifications or additional details we can provide to assist with your review, please do not hesitate to let us know.\\n\\nThank you for your time and valuable feedback.\"}", "{\"comment\": \"Thank you for adding this theorem.\\n\\nI believe this theorem was added in haste. I originally asked the authors that, if they are claiming robustness, they should do a Lipschitz constant analysis, to which the authors responded that it is intractable/computationally expensive. Without knowing the Lipschitz constants, how can you claim you are more robust?\\n\\nMoreover, equation 20 is misleading: Lipschitz continuity also applies to your encoding, and you also suffer from noise if you employ global Lipschitz continuity. I personally think this theorem would hurt your paper rather than helping it.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper is concerned with a new model-based reinforcement learning method, which utilizes a bisimulation metric. 
This formulation helps with the stability of training and with the robustness of the controller. The general idea is that they seek to find states that \\\"behave\\\" similarly, and the intuition behind it is that one can use similar control inputs for similar states, which simplifies the controller and makes it more interpretable. The authors learn an encoder which maps states of the environment to another domain, in which similar states are identified and mapped to the same representation (roughly speaking). Then, this representation is utilized to train a controller. The novelty of this work is to add the encoder loss directly into the training procedure.", "soundness": "3", "presentation": "3", "contribution": "2", "strengths": "The paper is well written, and all ideas are clearly explained. Overall, the paper is mathematically rigorous. The authors do a good job of walking the reader through the preliminaries, highlighting distinctions, and presenting their work.\\n\\nFurthermore, the paper presents more than 20 case studies, which helps immensely in comparing their performance to state-of-the-art methods.", "weaknesses": "Improvements and contributions seem incremental, and overall not that beneficial according to the case studies (Figure 6): one only sees improvements in a few case studies (such as humanoid walk, dog walk and trot), and performance basically identical to TD-MPC in others (such as humanoid stand, pendulum, cheetah).\\nThe only major contribution is adding the bisimulation metric loss to the loss function; the other two contributions naturally follow from this addition.\\n\\nAs the authors have mentioned, the hyperparameters play a huge role, and one wonders how much time is needed to tune these parameters.", "questions": "1- Based on your case studies, your method does not seem to change the episode return that much, except for a few cases like dog walk, dog trot and humanoid walk. In your Appendix, you provide a rough explanation of why that may be. 
Looking at Figures 7 and 8, it appears that the loss, and consequently the gradient, explode (in TD-MPC); however, in RL, gradient clipping is used to tackle this issue. When you compared your method to TD-MPC, did you employ gradient clipping for it or not? It does not appear to be a fair comparison if you didn't, and perhaps that is why your method did not do significantly better in other case studies, as the loss did not *explode*.\\n\\n2- I suggest you revise the experiments section and run all case studies on TD-MPC2 rather than TD-MPC. I realize it is touched upon in Appendix D; however, since TD-MPC2 is the updated version, I suspect it would make for a fairer comparison. Moreover, adding a thorough comparison would certainly present your method better: between training time, sample complexity, number of parameters used, and hyperparameter tuning and different configurations, it will strengthen your case if you could show how it might fail. I would also like to know the rationale behind using TD-MPC in the main body and mentioning TD-MPC2 in the appendix.\\nTo the best of my knowledge, TD-MPC2 can have many parameters, since it can be used on different domains. Thus, it is not an apples-to-apples comparison, unless it is specifically mentioned in the paper.\\n\\n3- Are there any theoretical results on why your method requires fewer parameters and converges faster, or is it mainly based on experiments? Since Theorem 3 only offers an upper bound on expected cumulative rewards for the optimal policy. 
\\n\\n4- I suggest a more rigorous approach for robustness, such as comparing the Lipschitz constant of your controller to TD-MPC's.\\n\\n5- I believe if you can theoretically confirm in which case studies your method is going to perform better than TD-MPC, it will strengthen your paper significantly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes BS-MPC, a model-based reinforcement learning approach that introduces bisimulation metrics (loss) on top of TD-MPC. Compare to TD-MPC, BS-MPC has an explicit encoder loss term, the adaptation of bisimulation metric, and parallelizing the BS loss. The authors found that their approach can improve training stability, robustness against input noise, and computation efficiency, which is validated on a set of simulation environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow. The overall presentation is good. The approach is sound and makes sense to the reviewer. The experimental results look promising, compared to TD-MPC.\", \"weaknesses\": \"However, the major weakness is its novelty.\\n1. The whole framework is based on TD-MPC. The difference is the authors introduce the Bisimulation metric and its corresponding loss design, which are from the existing literature, as stated in the paper. \\n2. It is also a common way to introduce additional regularization loss terms for the encoder of model-based RL. \\n3. The theoretical analysis mainly borrows from the existing work and does not have any major significant result. 
It would be great if the authors could provide \\\"Under the BS loss training error, what's the performance gap between the final converged policy by their approach and the ideally optimal policy\\\", and \\\"Theoretically, how much performance gain could their approach achieve, compared to TD-MPC.\\\"", "questions": "When you say BS-MPC improves computation efficiency, what does it mean? Is it compared to TD-MPC?\\nIt is surprising to me, because BS-MPC has one additional loss term compared to TD-MPC, so why is BS-MPC faster to run?\\n\\nWith the above question, I'd like to know the latency overhead of the BS loss term in training.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "4", "code_of_conduct": "Yes"}", "{\"title\": \"Paper Update2\", \"comment\": \"We added a new experimental result in the Carla simulator, as suggested by reviewer [w1co](https://openreview.net/forum?id=F07ic7huE3&noteId=ggQrsRBRSf).\\n\\n## Summary of the new experimental result\\nBS-MPC outperforms TD-MPC, even in environments where the background image moves along with the car. Additionally, while the BS-MPC loss converges to zero, the TD-MPC loss diverges as training progresses.\\n\\n## Where to find the results\\nWe have uploaded the results to the supplemental material. The reviewer can find the relevant figures in the figures folder, specifically named \\\"carla_reward.png\\\" and \\\"carla_consistency_loss.png.\\\"\"}", "{\"title\": \"New theorem added\", \"comment\": \"Following the comments by reviewer seSt, we added a new theorem that mathematically shows when BS-MPC can outperform TD-MPC. We have updated the paper and uploaded it. The new theorem can be found in Appendix A.2.\\n\\nHere is a short summary of the theorem.\\n\\nConsider an original state $s$ and a noisy state $\\tilde{s} = s + \\xi$, where $\\xi$ is noise. Assume $s$ and $\\tilde{s}$ are bisimilar. 
Under some assumptions (specified in the paper), we can get the following upper bounds.\\n\\n- TD-MPC\\n$ \\|\\phi^{\\text{TM}}(s) - \\phi^{\\text{TM}}(\\tilde{s})\\|_1 \\leq K \\|\\xi\\|_1$\\n\\n- BS-MPC\\n $\\|\\phi^{\\text{BM}}(s) - \\phi^{\\text{BM}}(\\tilde{s})\\|_1 \\leq \\mathcal{L}$\\n\\nThis result suggests that the upper bound for BS-MPC does not depend on the noise $\\xi$, whereas the upper bound for TD-MPC does. This suggests that BS-MPC can identify two bisimilar states, while TD-MPC can classify them as totally different states (the difference between $\\phi(s)$ and $\\phi(\\tilde{s})$ can be very large).\\n\\nTo the best of our knowledge, this is the first work to mathematically show the conditions under which TD-MPC fails and the proposed method succeeds.\\nWe also highlight that our approach is supported by mathematical foundations that are comparable to or stronger than those of other model-based or TD-MPC family methods [1][2][3][4][5].\\n\\n[1]. Hansen et al. (2024). TD-MPC2: Scalable, Robust World Models for Continuous Control.\\n[2]. Zhao et al. (2023). Simplified Temporal Consistency Reinforcement Learning. In Proceedings of the 40th International Conference on Machine Learning (ICML'23).\\n[3]. Yang et al. (2024). MoVie: Visual Model-Based Policy Adaptation for View Generalization. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS '23).\\n[4]. Zheng et al. (2024). TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS '23).\\n[5]. Ji et al. (2023). Dual Policy-Based TD-Learning for Model Predictive Control. 
In Proceedings of the 2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC).\"}", "{\"title\": \"Author response to reviewer CXNw\", \"comment\": \"We are grateful for the reviewer\\u2019s constructive suggestions and provide our responses below.\\n\\nQ1. When you say BS-MPC improves computation efficiency, what does it mean? Is it compared to TD-MPC? It is surprising to me, because BS-MPC has one additional loss term compared to TD-MPC, so why is BS-MPC faster to run?\\n\\nA1. Yes, BS-MPC achieves a faster computation speed than TD-MPC. The primary reason for the speed-up is the change in the computational structure. In BS-MPC, we encode all states s_{t:t+H} at once (z_{t:t+H} = \\phi(s_{t:t+H})), and hence eliminate the sequential computation. In contrast, TD-MPC needs to compute its latent state z_t using the previous-step latent state z_{t-1}. Therefore, its calculation flow includes sequential computation, which becomes the bottleneck of TD-MPC. We added a new Appendix C.2 to give more details about this explanation using pseudo-code. Also, please refer to Figure 2 for the difference in computational flow.\\n(Adding the bisimulation term does not contribute to the speed-up, but it helps BS-MPC to be robust against noise in the original state space S.)\\n\\n---\\n\\nQ2. With the above question, I'd like to know the latency overhead of the BS loss term in training.\\n\\nA2. We estimate the calculation cost in Equation (8). Let d denote the dimension of the latent space; the computational cost of the Gaussian Wasserstein distance is O(d^3). Therefore, the computational cost of the bisimulation metric for a batch of size B becomes O(B d^3).\\n\\n---\\n\\nQ3. Contribution weakness\\n\\nA3. 
We wrote our thoughts in the [general response](https://openreview.net/forum?id=F07ic7huE3&noteId=x4OIKV1V1Z).\"}", "{\"title\": \"Author Response to reviewer hjrQ\", \"comment\": \"We are grateful for the reviewer\\u2019s constructive suggestions and provide our responses below.\\n\\nQ1. \\u201cWe assume that the learned policy in BS-MPC continuously improves throughout training and eventually converges to the optimal policy \\u03c0\\u2217, which supports Theorem 1.\\u201d This seems to be a very strong assumption. For example, by looking at the training curve, the return does not improve monotonically, and we have no information about whether the learned policy is converging to the optimal policy. How do you explain such a strong assumption? Is it possible to remove it for the theoretical results?\\n\\nA1. Yes! We believe we can loosen this assumption by adapting the theory from the Robust Bisimulation Metric [1]. In this paper, they enable the bisimulation metric to have the same guarantee under more realistic assumptions. In our paper, we choose the optimal policy assumption \\pi^* because we would like to make the algorithm and proof simpler and more straightforward. 
However, as the reviewer suggested, we believe our method can be made more adaptable, under realistic assumptions, by using their robust bisimulation metric.\\n\\n[1]. Kemertas, M., & Aumentado-Armstrong, T. (2021). \\\"Towards Robust Bisimulation Metric Learning.\\\" https://arxiv.org/pdf/2006.10742\\n\\n---\\n\\nQ2. In Fig. 4, why do all non-MPC based methods only have results till 10M steps?\\n\\nA2. In the image-based experiments (Fig. 4), we cite our baseline results from their official papers. We assume the computational cost is very high and the authors only ran their code until 1M steps. We run BS-MPC and TD-MPC until 3M steps to show the stability of the proposed methods. We also choose the computational steps based on the TD-MPC2 paper [2].\\n\\n[2]. Hansen, N., Su, H., & Wang, X. (2024). TD-MPC2: Scalable, Robust World Models for Continuous Control. https://arxiv.org/abs/2310.16828\\n\\n---\\n\\nQ3. There are several typos in the paper. \\u201cIn BS-MPC, the latent dynamics are modeled using an MLP. We also model the latent dynamics model with an MLP\\u201d I believe BS-MPC should be TD-MPC. \\u201cwe sample M action sets from Gaussian distribution N (\\u03bc0, \\u03c30) based on the initial mean\\u03bc0 and standard deviation \\u03c30\\u201d Missing spacing between mean and \\\\mu_0\\n\\nA3. We sincerely thank the reviewer for identifying these typos and providing detailed feedback. We have carefully revised the manuscript to address these issues. We corrected the sentences and simplified the phrasing to enhance clarity. A revised version of the paper has been uploaded for your review.\\n\\n---\\n\\nQ4. Contribution weakness\\n\\nA4. We wrote our thoughts in the [general response](https://openreview.net/forum?id=F07ic7huE3&noteId=x4OIKV1V1Z).\"}", "{\"title\": \"Author response to reviewer seSt Part1\", \"comment\": \"We are grateful for the reviewer\\u2019s constructive suggestions and provide our responses below.\\n\\nQ1. 
Based on your case studies, your method does not seem to change the episode return that much, except for a few cases like dog walk, dog trot and humanoid walk. In your Appendix, you provide a rough explanation of why that may be. Looking at Figures 7 and 8, it appears that the loss, and consequently the gradient, explode (in TD-MPC); however, in RL, gradient clipping is used to tackle this issue. When you compared your method to TD-MPC, did you employ gradient clipping for it or not? It does not appear to be a fair comparison if you didn't, and perhaps that is why your method did not do significantly better in other case studies, as the loss did not explode.\\n\\nA1. Yes, we do use gradient clipping for TD-MPC in our experiments. Specifically, the setting file (YAML file) used in our experiments includes a parameter labeled 'clip_grad_norm,' which specifies the gradient clipping threshold. In Figure 8, we present the gradient values before clipping, for transparency.\\n\\nAlthough gradient clipping is applied, we observe that the error values continue to grow and eventually explode, leading to degraded performance. This issue highlights the inherent instability in TD-MPC. Moreover, while BS-MPC achieves similar performance to TD-MPC in many cases, it demonstrates significantly superior robustness in noisy environments, as shown in Figure 5.\\n\\nFinally, we want to emphasize that the primary objective of BS-MPC is to provide a more stable learning process and performance compared to TD-MPC, rather than merely outperforming it across all metrics. This is one of the open problems in TD-MPC, as shown in Figure 1.\\n\\n---\\n\\nQ2. I suggest you revise the experiments section and run all case studies on TD-MPC2 rather than TD-MPC. I realize it is touched upon in Appendix D; however, since TD-MPC2 is the updated version, I suspect it would make for a fairer comparison. 
Moreover, adding a thorough comparison would certainly present your method better: between training time, sample complexity, number of parameters used, and hyperparameter tuning and different configurations, it will strengthen your case if you could show how it might fail. I would also like to know the rationale behind using TD-MPC in the main body and mentioning TD-MPC2 in the appendix.\\nTo the best of my knowledge, TD-MPC2 can have many parameters, since it can be used on different domains. Thus, it is not an apples-to-apples comparison, unless it is specifically mentioned in the paper.\\n\\nA2. Thank you for the suggestion. We agree with the reviewer's idea and are working on this. Since TD-MPC2 does not provide some of the results for image-based tasks, we are currently running their official code.\\n\\nThere are three primary reasons why we chose TD-MPC over TD-MPC2 as our baseline:\\n\\n- Parameter Count: As the reviewer highlighted, TD-MPC2 requires significantly more parameters than both BS-MPC and TD-MPC. While BS-MPC and TD-MPC have approximately 1M parameters, TD-MPC2 requires 5M parameters to achieve comparable performance. This increased complexity substantially raises computational costs, which is a concern given that TD-MPC already incurs high computational overhead.\\n\\n- Discrete Regression: TD-MPC2 introduces discrete regression to handle variations in reward magnitude and Q-values. While this eliminates the need for certain parameter tuning, it also necessitates discretizing these values, further increasing computational costs. This trade-off made TD-MPC2 less aligned with our goal of balancing performance with computational efficiency.\\n\\n- Network Complexity: TD-MPC2 employs additional layers and normalization mechanisms, such as SimNorm normalization, which biases the latent state representation towards sparsity and maintains a small \\u21132-norm. 
While these techniques have shown empirical success, they lack a robust theoretical foundation to justify their inclusion. In contrast, BS-MPC intentionally adopts the same simple network architecture as TD-MPC, augmenting it with a bisimulation metric to regularize the latent space projection. This simplicity aligns with our design philosophy while having preferable theoretical support.\\n\\n---\\n\\nQ3. Are there any theoretical results on why your method requires fewer parameters and converges faster? Or is it mainly based on experiments? Since Theorem 3 only offers an upper bound for expected cumulative rewards for the optimal policy.\\n\\nA3. BS-MPC has the exact same number of parameters as TD-MPC, as both of them use the exact same architecture. However, since our method has a different computational structure, BS-MPC can enjoy parallel computation when computing and optimizing its cost. This accounts for the faster computation time. We add Appendix C.2 for more details about the computational structure change.\"}", "{\"metareview\": \"The paper presents Bisimulation Metric for Model Predictive Control (BS-MPC), a technique for model-based reinforcement learning that uses a bisimulation metric to improve encoder stability, robustness to noise, and computational efficiency. The benefits of BS-MPC are demonstrated via experiments on both continuous control and image-based tasks from the DeepMind Control Suite.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed responses to the concerns raised by the reviewers. Even after the discussion period, the reviewers remained split on whether the paper meets the bar for acceptance. Unfortunately, reviewer CXNw did not engage with the authors or other reviewers during the rebuttal and discussion despite recommending rejection.
To be fair to the authors I am not taking CXNw's rejection recommendation into consideration for the final decision.\\nReviewer seSt provided a dissenting opinion about unresolved issues regarding the claim of robustness in the paper. I thank the reviewer for engaging with the authors and providing suggestions for improving the paper. The key disagreement seems to be whether the claim of robustness over TD-MPC can be made without comparing the respective Lipschitz constants. Having carefully reviewed the discussion and the paper, I feel that the authors have provided sufficient empirical evidence to justify their claim of robustness. In particular, the paper does not claim to show that BS-MPC is provably more robust than TD-MPC, but only that experimental results show that it is more robust. While a theoretical guarantee of improved robustness would certainly improve the paper, it is currently unknown if such a guarantee exists and hence empirical evidence of robustness should be considered sufficient in view of the other scientific contributions of this paper. As it stands, especially accounting for the changes already incorporated in the paper after the discussion period, the paper makes a significant enough contribution to warrant acceptance.\"}", "{\"summary\": \"This paper considers model based reinforcement learning and proposes bisimulation metric to improve over temporal differential MPC method. The authors show theoretical analysis of the expected cumulative rewards in the latent space, and empirically demonstrate enhancement over TD-MPC and other baselines on several continuous control tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is clearly written and well presented. The proposed bisimulation metric seems to work well on the experiments considered, compared to TD-MPC and other baselines. 
The supplementary sections are comprehensive.\", \"weaknesses\": \"The novelty of the paper seems ambiguous. It seems that both on-policy bisimulation and TD-MPC methods are well studied for model based RL, and the authors plug bisimulation into TD-MPC.\\n\\nThere are several typos in the paper.\\n\\u201cIn BS-MPC, the latent dynamics are modeled using an MLP. We also model the latent dynamics model with an MLP\\u201d I believe BS-MPC should be TD-MPC.\\n\\u201cwe sample M action sets from Gaussian distribution N (\\u03bc0, \\u03c30) based on the initial mean\\u03bc0 and standard deviation \\u03c30\\u201d Missing spacing between mean and \\\\mu_0\", \"questions\": \"\\u201cWe assume that the learned policy in BS-MPC continuously improves throughout training and eventually converges to the optimal policy \\u03c0\\u2217, which supports Theorem 1.\\u201d\\nThis seems to be a very strong assumption. For example, by looking at the training curve, the return does not improve monotonically, and we have no information about if the learned policy is converging to the optimal policy. How do you explain such a strong assumption? Is it possible to remove it for the theoretical results?\\n\\nIn Fig. 4, why do all non-MPC based methods only have results till 10M steps?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Friendly reminder\", \"comment\": \"As the deadline for the rebuttal is in one week, we would like to kindly remind the reviewer to review our response to your comments.\\n\\nIf there are any clarifications or additional details we can provide to assist with your review, please do not hesitate to let us know.\\n\\nThank you for your time and valuable feedback.\"}" ] }
EzrZX9bd4G
BEEM: Boosting Performance of Early Exit DNNs using Multi-Exit Classifiers as Experts
[ "Divya Jyoti Bajpai", "Manjesh Kumar Hanawal" ]
Early Exit (EE) techniques have emerged as a means to reduce inference latency in Deep Neural Networks (DNNs). The latency improvement and accuracy in these techniques crucially depend on the criteria used to make exit decisions. We propose a new decision criterion BEEM where exit classifiers are treated as experts and aggregate their confidence scores. The confidence scores are aggregated only if neighbouring experts are consistent in prediction as the samples pass through them, thus capturing their ensemble effect. A sample exits when the aggregated confidence value exceeds a threshold. The threshold is set using the error rates of the intermediate exits aiming to surpass the performance of conventional DNN inference. Experimental results on the COCO dataset for Image captioning and GLUE datasets for various language tasks demonstrate that our method enhances the performance of state-of-the-art EE methods, achieving improvements in speed-up by a factor $1.5\times$ to $2.1\times$. When compared to the final layer, its accuracy is comparable in harder Image Captioning and improves in the easier language tasks. The source code is available at https://github.com/Div290/BEEM1/tree/main.
[ "Early Exits; Expert-based exiting" ]
Accept (Poster)
https://openreview.net/pdf?id=EzrZX9bd4G
https://openreview.net/forum?id=EzrZX9bd4G
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z3qIo0J1oY", "z2O2phAKnB", "xycR4BTHop", "t0QqJrGDFX", "s51igEGDDr", "olsmEgkXQq", "oCrabx4WR3", "j7aBKnamFx", "eEvCEc7dzS", "dVG30F0D5d", "cErUfS1rws", "aXA3h37i4C", "TwiXyPIc9u", "KVNMiKYtTo", "JP4aj4O92G", "JFI2a6Gj6p", "FxwgMTF7tW", "An1svlUmMa", "AhVLMZ9MKB", "6WcDkdizn6", "67t6zjIk3B", "668PmGGP7A", "5AZ9HHPo6u", "4pdiOqVqR9", "4pdLo3iqKP", "3pufPpZeXR", "3VnAe3zoOr", "0YMwzjAEHy" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733026579892, 1732474262038, 1732099624141, 1733124237399, 1734805985949, 1737524131327, 1732274522134, 1732283882480, 1729203422447, 1732099592805, 1733225455549, 1732717105982, 1732100891627, 1732827905283, 1732136433522, 1732624417604, 1732280884076, 1733062657263, 1730688074369, 1732780294366, 1732624384559, 1732732281168, 1732554899671, 1732905152523, 1730583605522, 1732111684535, 1732514923309, 1731097904248 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Area_Chair_EXgp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_mL6L" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_mL6L" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_9UE5" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_MFiX" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_RQs8" ], [ "ICLR.cc/2025/Conference/Submission11559/Authors" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_RQs8" ], [ "ICLR.cc/2025/Conference/Submission11559/Reviewer_MFiX" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer 9UE5,\\n\\nAs the discussion phase is ending soon, it is a gentle reminder to acknowledge our rebuttal.\\nAgain, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Gentle reminder\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion phase is ending soon, it is a gentle reminder to acknowledge our rebuttal and make the necessary changes based on that. \\n\\nAgain, we thank you for your time and effort in reviewing our work. \\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Further clarifications.\", \"comment\": \"Q3: The proposed method is incremental, where it differs from the baseline methods of whether or not to use a weighted sum of the exit classifiers. And there\\u2019s also no ablation test to verify this point of using a weighted sum versus using simply individual exit classifiers. 
It's unclear whether we should treat DeeBERT and PABEE as ablation tests because the results are produced with the existing code (line 376) instead of a comparable setup (e.g. exactly the same code from this paper).\", \"a3\": \"**Incremental work:** We disagree that our work is incremental. Earlier work focused on proposing a confidence metric for early exit decisions without delving into threshold values. Our work considers both aspects, which makes it more complete. We note that earlier works chose thresholds based on user choice or based on a threshold that maximizes the accuracy over the validation set. In our case, we have formulated this problem as an optimization problem with a constraint. The constraint ensures overall accuracy with early exits, which is better than exiting from the final layer only. Further, we have given a theoretical analysis of this claim. Earlier works do not provide such an analysis.\\n\\nIn summary, our work provides a full solution to the early exit procedure: we first proposed a new confidence metric that uses the ensemble of classifiers to make exiting more confident and reliable. We then provided a solution to the important problem of threshold selection and gave a sound method with theoretical justification.\\n\\n**Ablation tests:** The results of PABEE can be considered as an ablation study, as we have used a similar setup, model, parameters, and loss objectives. Also, we performed an ablation study over what you suggested and found that it was very close to the PABEE method; hence, we did not explicitly include a table and section for this.
\\n\\n| | SST-2 | | MNLI | |\\n| ---------------- | ----- | ----- | ----- | ----- |\\n| | Acc | Speed | Acc | Speed |\\n| Confidence-based | 89.8 | 1.78 | 81.3 | 1.7 |\\n| Patience-based | 92.3 | 1.87 | 84 | 1.85 |\\n| Ours | 92.7 | 1.98 | 84.8 | 1.96 |\\n\\nHowever, for DeeBERT, as the setup was quite different (DeeBERT performs separate training and we perform joint training), there are slight changes in the result, but the PABEE result is the same since our setup is similar to PABEE's. We will add these results in the appendix of the final version.\\n\\n\\nWe hope that we clarified most of your doubts and concerns. If you have any further comments, doubts or suggestions, please let us know and we will be happy to clarify them. If not, we request you to please reassess the scores. We again thank you for your time and effort in assessing our work.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer 9UE5,\\n\\nWith only 1 day remaining in the author-reviewer discussion, we request you to please acknowledge our rebuttal and let us know if you have any further concerns.\\n\\nRegards,\\n\\nAuthors.\"}", "{\"metareview\": \"This paper proposes BEEM (Boosting Performance of Early Exit DNNs using Multi-Exit Classifiers as Experts), a new decision criterion for \\\"early exit\\\" (instead of using the full forward pass, if the intermediate output is sufficiently good, then use the intermediate output as the final output). BEEM treats the exit classifiers as experts and aggregates their predictions. Based on the consistency of the predictions and a threshold, this paper sets the early exit point of the DNN. The empirical verification is conducted on COCO and GLUE, where the proposed method shows both benchmark score and speed-up improvements.\\n\\nThis paper has mixed opinions (three borderlines and one positive), where the reviewer who recommended borderline rejection did not engage in the discussion despite the reminders from the authors and the AC.
This paper has strengths in (1) achieving meaningful improvements in inference speed while keeping the original accuracy and (2) the novelty of the idea. There were several concerns from the reviewers. For example:\\n\\n- [RQs8, mL6L] Writing quality\\n- [MFiX, 9UE5] No detailed study on the sensitivity of the threshold selection / validation set size / Patience-based vs. confidence-based\\n- [MFiX] A non-realistic speed-up metric and decoder-only exit scenario would be problematic.\\n- [9UE5] The tasks would not be generalized to the other tasks\\n- [9UE5] The aggregation of confidence scores may need additional computational complexity.\\n\\nReviewers RQs8, mL6L, and MFiX acknowledged that most of their concerns were resolved after reading the authors' responses. Specifically, the authors provided ablation studies on the validation set size and the design choice (Patience-based vs. confidence-based).\\n\\nRegarding the concerns raised by Reviewer 9UE5, I think that some of them are resolved and others remain unresolved. For example, the sensitivity of the threshold selection would be addressed by the validation set size ablation study, and the complexity issue was clarified by the authors (no additional complexity in storing the confidence values compared to previous EE methods).\\n\\nOn the other hand, I also agree with Reviewer 9UE5 that it would be great to validate the proposed method on different domains and tasks. For example, one can test the benefit of the proposed method on supervised vision classification (e.g., ImageNet) or zero-shot vision classification (e.g., CLIP) that does not need a decoder framework. As pointed out by Reviewer MFiX, this work only considers the decoder-only exit strategy, and as the authors acknowledged that \"the encoder cost is very minimal\", the proposed method would not be as effective in encoder-only tasks as in encoder-decoder frameworks.
However, this would not be a case for rejection, but a gentle suggestion for improving the submission; I recommend adding more experiments on encoder-only scenarios to improve its real-world contribution.\\n\\nOverall, all the reviewers who engaged in the discussion reached a positive consensus, and the opinion from the negative reviewer looks like a manageable issue. I recommend acceptance for this paper.\", \"additional_comments_on_reviewer_discussion\": [\"There were several concerns from the reviewers. For example:\", \"- [RQs8, mL6L] Writing quality\", \"- [MFiX, 9UE5] No detailed study on the sensitivity of the threshold selection / validation set size / Patience-based vs. confidence-based\", \"- [MFiX] A non-realistic speed-up metric and decoder-only exit scenario would be problematic.\", \"- [9UE5] The tasks would not be generalized to the other tasks\", \"- [9UE5] The aggregation of confidence scores may need additional computational complexity.\", \"Reviewers RQs8, mL6L, and MFiX acknowledged that most of their concerns were resolved after reading the authors' responses. Specifically, the authors provided ablation studies on the validation set size and the design choice (Patience-based vs. confidence-based).\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion phase is ending soon, we request you to acknowledge our rebuttal and let us know if you have any further questions.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Further clarifications\", \"comment\": \"Thanks for the acknowledgement.\\n\\nSure, your suggestion is logical and that could also be explored. But please note that your suggestion is a different way of combining the confidence scores of individual classifiers. Multiple such combinations can be defined.
Still, we will explore the idea given by you.\", \"for_the_second_ques\": \"We chose the 9th layer specifically because, for the image captioning task, the performance of the 6th layer was really poor and the results made no sense to compare with the final layer. For the 9th layer, it was much better and comparable.\\n\\nOnce again, thanks for the acknowledgement.\"}", "{\"summary\": \"BEEM presents an approach for reducing the latency of model inference by allowing classification decisions to be made at multiple points along the model, with early termination occurring if classification confidence exceeds a threshold. Instead of treating each early exit point independently, BEEM also considers a consensus along a span of exit points to be sufficient for early exiting, even if the confidence of the current exit point is below the threshold that would allow exit by itself.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Treating early exiting as an ensemble based consensus mechanism is a good idea, and weighting those exit classifiers based on their ability to classify effectively is also a good idea. As best I can tell, the approach does not introduce hyperparameters without providing a rigorous way to arrive at them, which I think helps with the general utility of the approach. Given that classifiers and next-token generation are major parts of modern AI, and this approach works on both, I think this work has significance.\", \"weaknesses\": \"I don't feel like there are any showstopping weaknesses that call into question the acceptability of the paper in general. So I will limit it to some constructive feedback, and then hopefully the authors can speak to my questions below.\", \"figure_1\": [\"I found this figure a little bit hard to parse, even after reading the rest of the paper.
I think that if you had an arrow from the 0.065 box to the addition operator above it, it would be much more obvious what\\u2019s happening.\", \"Section 3.4.2:\", \"Line 262.5: Should it be c_stop or c_stop^t?\", \"Should c_misc^t be divided by c_stop on line 264, as it\\u2019s meant to be the count (\\u201crepresents the number of samples \\u2026\\u201d), and thus `c_misc^t / c_stop^t` would be the `p^t` error rate? Otherwise we\\u2019re looking at `c_misc^t / c_stop^{2 (t)}` which doesn\\u2019t seem to be right.\"], \"section_4\": \"Why was DNN-9L chosen? Probably a minor question, but unless that comes from a related work, I'm not sure where the 9 came from.\", \"nit\": [\"Wrong parenthesis form on line 319\", \"If you have space, it might be good to introduce what BEEM-A and BEEM-C mean in Table 1, as it doesn\\u2019t show up until the end of section 4.1.Metric.\", \"Line 445 \\u201cpoo [sic]\\u201d typo\"], \"questions\": \"Equation 2:\\nWhy do we only track the max prediction chain? Alternatively, could we just continue to aggregate weighted confidences until one of them reaches threshold? For example, if the early parts of the model are trying to decide between two classes, and thus have comparable (but oscillating max) confidence, in BEEM, these oscillations cause the S_i score to stay small, and thus requiring a deeper traversal once the two classes are resolved into one (as we need to build confidence for the single resolved class).\", \"general\": \"It seems intuitive that perhaps some layers are only specifically good at EE-classifying a subset of classes. Have you observed any behavior like this? 
It would tie into my question about equation 2, which is that because you're relying on a chain of consistent max labels, that uneven error distributions could be harming your overall approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"q1\": \"Speed-up factor is only computed based on the theoretical analysis (line 412), which is very misleading, especially for image captioning tasks where exit is only attached to the decoder but not encoder (line 201). In this case, do you count speed up only based on decoder layers? This is problematic, for example the baseline method MuE in Table3 applies early exit to both the encoder and the decoder (Figure2 of https://arxiv.org/pdf/2211.11152) and how is that calculated in line 386? Or do you also count encoder layers as a constant in this case as the inference cost is never reduced for encoders? If that\\u2019s the case, then it\\u2019s also not accurate due to the shape mismatch between the vision encoder and the language decoder. It would be more useful to report the actual end-to-end inference time speed up with and without early exit.\", \"a1\": \"**Speedup metric:** We acknowledge that end-to-end time would have been a better metric, but we considered the speedup factor for two reasons: 1) we wanted to be consistent with existing methods such as DeeCAP and MuE; 2) it can easily be converted into other metrics such as expected time, and it is proportional to wall-clock time. Existing methods such as DeeCAP, MuE, PABEE, DeeBERT, etc. already explain its benefits, which made it a standard metric for judging Early Exit speedup.\\n\\n**Why only Decoder Speedup:** The encoder speedup is always a constant value, i.e., 1, as there is no speedup observed in the encoder.
It does not depend on the shape of the encoder and could be considered constant across all the baselines (as done by existing baselines except MuE).\\n\\nNote that the encoder cost is minimal compared to the decoder in image captioning tasks with autoregressive decoding: after one pass through the encoder, the decoder makes multiple forward passes until the end-of-sentence token is predicted. Hence, in autoregressive decoding, the major reduction in inference time comes from adding exits to the decoder.\\n\\nFinally, the early exit approach in MuE cannot be extended to models where the shape of the output of the encoder or decoder changes at every layer, as we cannot compute the similarity score. This is the case with our setup as well. The Swin Transformer is used as an encoder in our method, which has a different output shape at every layer; hence, the MuE approach could not be extended to our encoder.\", \"q2\": \"Per-layer threshold tuning depends on the validation set of each downstream task, which makes it not practical for broad use cases (e.g. zero-shot or few-shot tasks). Section 6.2 compares vanilla threshold selection with the proposed threshold selection method, do you also have a study showing the impact of validation set size? For example, if you reduce the size of the validation set, would that significantly affect the quality of the proposed threshold setting method?\", \"a2\": \"Validation set for fine-tuning thresholds: Yes, for the early exit models, we require a training dataset, as the weights of the classifiers need to be learned; these models require some amount of fine-tuning to train the weights of the attached early exits. To the best of our knowledge, most of the works perform fine-tuning of the backbone with exits before inference, which requires a training set.
From that training dataset, we take a small split as a validation set that is used for setting the threshold.\\n\\n**Ablation study:** Thanks for the suggestion about the ablation study on the size of the validation dataset; here is a table that contains the required results.\\n\\n| | SST-2 | | MNLI | |\\n| ------------ | ----- | ----- | ---- | ----- |\\n| Val_set_size | Acc | Speed | Acc | Speed |\\n| 500 | 91.9 | 1.93 | 83.1 | 2.07 |\\n| 1000 | 92.4 | 1.92 | 83.9 | 2.01 |\\n| 2000 | 92.5 | 1.92 | 84.1 | 1.99 |\\n| 4000 | 92.6 | 1.91 | 84.1 | 1.96 |\\n\\n\\nThe table shows the impact on performance when the size of the validation split taken from the training dataset is reduced. Note from the table that our method of setting the threshold is not too sensitive to the size of the validation set unless it is extremely small, such as only 500 samples. The drop in such cases is due to the fact that the optimization algorithm gets very few samples to optimize over. However, with such large datasets available, we expect the size of the validation set to be greater than 500 samples.\\n\\nMore clarifications follow in the reply to this.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for the acknowledgement of the rebuttal and for increasing the score.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nIt is a gentle reminder to acknowledge our rebuttal and make the necessary reassessment of the score.
Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"q1\": \"While the paper demonstrates strong performance on NLP and image captioning tasks, it is unclear how well BEEM generalizes to other types of tasks or datasets, which could be a limitation in its applicability.\", \"a1\": \"We have provided results on various NLP classification tasks and image captioning tasks, which are two major standard tasks in deep learning. However, note that our proposed confidence metric does not depend on the model or task at hand and can be applied to any task and model. Another thing that we propose is the choice of threshold, which again does not depend on the model or task at hand and can be used for any task and model architecture. Since there is no dependence of our method on the task type, it is easily generalizable to the tasks that the chosen model is capable of doing. To prove this, we have experimented with two different domains: NLP and image. In NLP, we have considered three different tasks: entailment classification, sentiment analysis, and natural language inference.\\n\\nStill, if you want us to provide results on a particular type of dataset, please let us know, and we will add that to the paper.\", \"q2\": \"The paper could provide more details on the computational complexity of implementing BEEM, especially regarding the aggregation of confidence scores and the potential overhead it introduces.\", \"a2\": \"We believe that there will be no additional computational overhead as compared to existing EE methods. For instance, consider the confidence metric of DeeBERT, where the confidence is checked at every layer; we just store this confidence value, hence no additional computation. Similarly, PABEE checks the consistency in prediction; it also stores confidence values without additional computational complexity.
Hence, there is no additional complexity in storing the confidence values.\\n\\nIf you are talking about the computational cost of getting the confidence values, it is also minimal as compared to the backbone layers, as each exit consists of only a single linear layer to map the hidden representations to the output classes. Due to this simplicity, existing methods also do not consider the computational cost of the exits. We are using similar architectures for the exits as already used in existing methods. For a fair comparison, we are using the same metrics as used by earlier methods, such as speedup, to quantify the reduction in computational cost.\", \"q3\": \"The performance of BEEM is highly dependent on the choice of thresholds, and the paper could benefit from a deeper exploration of how sensitive the model is to these choices.\", \"a3\": \"The threshold in our setup is obtained by solving an optimization problem. The only thing that remains to explore is how sensitive the solution of the optimization problem is to the validation set size, which we show in the table below.\\n\\n| | SST-2 | | MNLI | |\\n| ------------ | ----- | ----- | ---- | ----- |\\n| Val_set_size | Acc | Speed | Acc | Speed |\\n| 500 | 91.9 | 1.93 | 83.1 | 2.07 |\\n| 1000 | 92.4 | 1.92 | 83.9 | 2.01 |\\n| 2000 | 92.5 | 1.92 | 84.1 | 1.99 |\\n| 4000 | 92.6 | 1.91 | 84.1 | 1.96 |\\n\\nNote from the table that the validation set size has a very small impact on the performance of the model, making our method less sensitive to the validation set.\\nWe also establish that, using our threshold, the overall accuracy is better than that of the last layer. Hence, we provide a sound method for the choice of threshold, which is missing in earlier works.\\n\\nAn ablation study over this has been made in Table 5 and Section 6.2 of our work, where we detail the effect of setting the threshold by solving the optimization problem vs. choosing the thresholds without any objective function.
If you need any other type of ablation, please let us know, and we will add it in the paper.\\n\\nWe hope that we clarified most of your concerns and doubts. Please let us know if you have any further questions and we will be happy to answer them. However, we are surprised that you have not raised any major concerns and have given overall positive feedback on our approach; still, you gave us a rating of 5. We request that you please reassess the score if you are happy with our contribution. Again, we thank you for your time and efforts in assessing our work.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer 9UE5,\\n\\nIt is a gentle reminder to acknowledge our rebuttal. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the comments.\", \"ques_1\": \"The introduction and abstract delve too deeply into technical details from the outset. Consider starting with a high-level overview before moving into specifics, which are available in detail later.\", \"ans_1\": \"We started our paper with a fairly detailed introduction, as Early Exit is one of the most prominent dynamic inference methods, with work dating back to 2016. There is a lot of literature and even surveys on EE. We also wanted to make our paper technically sound; hence, a detailed introduction was crucial.\", \"ques_2\": \"Rework Figure 1: It should be consistent with other figures.
For instance, S1 should connect to S2 with an arrow.\", \"ans_2\": \"We have added an arrow in Figure 1; it is visible in the newly uploaded PDF, please have a look.\", \"ques_3\": \"How are parameters denoted for exit classifiers?\", \"ans_3\": \"As stated in line 168, $\\\\theta$ denotes the set of all the parameters, which also includes the parameters of the exit classifiers.\", \"ques_4\": \"Are they trained independently or jointly with the model?\", \"ans_4\": \"They are jointly trained with the model, as seen in the objective function for training.\", \"ques\": \"Are the exit classifiers linear?\", \"ans\": \"Yes, they are linear, as stated in lines 392-393.\", \"ques_5\": \"Additionally, a brief (one-sentence) explanation of why the KL term is relevant would - improve clarity.\", \"ans_5\": \"We have added \\u201cKL divergence (Kullback-Leibler divergence) is used in knowledge distillation because it measures how well one probability distribution (the student model's predictions) approximates another (the teacher model's predictions). In the context of knowledge distillation, KL divergence serves as a key component to transfer \\\"soft knowledge\\\" from the teacher to the student.\\u201d in lines 171-174 of our revised version.\", \"ques_6\": \"Is the inference algorithm novel here, or adapted from previous work?\", \"ans_6\": \"Yes, the inference algorithm is novel, as we have proposed a different method to measure the confidence metric that ensembles multiple classifiers to decide exiting.
Also, we set the threshold in a different, novel manner as compared to other methods.\", \"ques_7\": \"Describe the task (e.g., image captioning) first before discussing specific architectures.\", \"ans_7\": \"We have added \\u201cFor the image captioning task where the objective is to generate a caption for an input image,\\u201d in line 194 of our revised version.\", \"ques_8\": \"Line 219: Statements like \\\"confidence-based early exit methods like DeeBERT and ElasticBERT...\\\" lack supporting evidence. Many claims are subjective, without empirical or theoretical backing.\", \"ans_8\": \"To support this, we have already cited the DeeBERT and ElasticBERT papers in multiple places in the paper, such as line 150 and line 046. It would be helpful if you could specify the subjective claims that need further clarification.\", \"ques_9\": \"Lines 256\\u2013259 are confusing; rephrase them. For instance, what is L? Is it the number of layers? Why is S_i < L important? What does i signify?\", \"ans_9\": \"We have rephrased them as \\u201cThe values of $w_i\\\\in [0,1]$, $C_i\\\\in [0,1]$ imply that $S_i \\\\leq L$, i.e., the score at any exit layer $i$ cannot be greater than the number of layers $L$: as $S_i$ is a product of two values between $0$ and $1$, each term is very small and is added at most $L$ times. We choose the best-performing threshold on the validation set in terms of accuracy.\\u201d Please suggest if there are more changes required.\", \"ques_10\": \"Use \\\\citep instead of \\\\citet when appropriate (e.g., Line 46).\", \"ans_10\": \"We have updated it in the revised version submitted now.\", \"ques_11\": \"Line 215: There's no Equation 5.\", \"ans_11\": \"We fixed this in the revised version.\", \"ques_12\": \"Theorem 3.1: Avoid using t for the layer index, as it was previously used for tokens (Line 205).
Clarify terms a_t and b_t, which are hard to interpret.\", \"ans_12\": \"We have fixed this in the revised version.\", \"ques_13\": \"Table 5 should appear after Table 4.\", \"ans_13\": \"Fixed this issue in the revised (newly submitted version).\\n\\nWe have fixed most of the presentation issues, please have a look by downloading the newest version and suggest if there are any further changes required. We hope that we answered most of your doubts. Please let us know if you have any further questions. If not, we request you to please reassess the scores.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer 9UE5,\\n\\nIt is a gentle reminder to acknowledge our rebuttal. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you.\", \"question_3\": \"Given that you're using the rule that accuracy must improve over the baseline, and using that to set thresholds, wouldn't that cause the $w_i$ to be small for early layers with potentially spurious representations? To be clear, I don't think this is a disqualifier for the work presented in the paper, but rather whether you've considered the approach, and if so, verified that it's worse than maxpred due to false confidence.\", \"question_4\": \"I suppose the question I was asking was \\\"why 9 specifically\\\". Looking at table 3 again, I suppose it's because the 5 comparison (x)BERT methods all have speedup ratios around 1.33, so their expected exit point is roughly layer 9. For table 1, I think the choice of 9 looks more arbitrary as most methods have higher speedup ratios, and so a choice of something achieving a speedup ratio of around 1.7-1.8 would seem more suitable in the ALBERT group. 
Similarly for table 2, a choice of speedup around 1.6x might seem more natural.\\n\\nOverall, I'm still inclined to keep my current score the same.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer 9UE5,\\n\\nAs the discussion phase is ending soon, it is a gentle reminder to acknowledge our rebuttal. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"summary\": \"The paper introduces BEEM, a novel framework for enhancing the performance of Early Exit Deep Neural Networks (DNNs) by treating exit classifiers as experts. The key idea is to aggregate confidence scores from these exit classifiers, making exit decisions based on the ensemble effect and consistency of predictions. A sample exits the network when the aggregated confidence exceeds a predefined threshold, which is determined by the error rates of intermediate exits. The paper presents experimental results on the COCO dataset for image captioning and GLUE datasets for language tasks, showing improvements in speed-up by a factor of 1.5\\u00d7 to 2.1\\u00d7 and comparable or improved accuracy compared to the final layer of DNNs. The significance lies in its ability to reduce inference latency while maintaining or enhancing accuracy, particularly useful for resource-constrained environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces BEEM, a novel framework for enhancing the performance of Early Exit Deep Neural Networks (DNNs) by treating exit classifiers as experts. The key idea is to aggregate confidence scores from these exit classifiers, making exit decisions based on the ensemble effect and consistency of predictions. A sample exits the network when the aggregated confidence exceeds a predefined threshold, which is determined by the error rates of intermediate exits. 
The paper presents experimental results on the COCO dataset for image captioning and GLUE datasets for language tasks, showing improvements in speed-up by a factor of 1.5\\u00d7 to 2.1\\u00d7 and comparable or improved accuracy compared to the final layer of DNNs. The significance lies in its ability to reduce inference latency while maintaining or enhancing accuracy, particularly useful for resource-constrained environments.\", \"weaknesses\": \"1. The paper provides extensive experimental results demonstrating the effectiveness of BEEM in improving inference speed and accuracy across various NLP tasks and image captioning, which strengthens the credibility of the proposed method.\\n\\n2. The theoretical analysis providing conditions under which BEEM outperforms standard DNN inference adds depth to the paper and offers insights into its underlying mechanisms.\\n\\n3. The paper is well-organized, with clear explanations of the methodology, experiments, and results, making it accessible to readers.\", \"questions\": \"1. While the paper demonstrates strong performance on NLP and image captioning tasks, it is unclear how well BEEM generalizes to other types of tasks or datasets, which could be a limitation in its applicability.\\n\\n2. The paper could provide more details on the computational complexity of implementing BEEM, especially regarding the aggregation of confidence scores and the potential overhead it introduces.\\n\\n3. The performance of BEEM is highly dependent on the choice of thresholds, and the paper could benefit from a deeper exploration of how sensitive the model is to these choices.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response and supporting results of various validation set sizes. 
They addressed my concerns, thus I increase my score from 5 to 6.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer MFiX,\\n\\nIt is a gentle reminder to acknowledge our rebuttal and make necessary reassessment of the score. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer,\\n\\nIt is a gentle reminder to acknowledge our rebuttal. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for the acknowledgement and increasing the score.\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer 9UE5,\\n\\nIt is a gentle reminder to acknowledge our rebuttal. Again, we thank you for your time and effort in reviewing our work.\\n\\nRegards,\\n\\nAuthors\"}", "{\"summary\": \"The authors propose a new decision criterion, BEEM, where exit classifiers are considered experts whose confidence scores are aggregated. A sample exits when the aggregated confidence surpasses a set threshold. The paper evaluates this approach on the COCO dataset for image captioning and GLUE datasets for various transformer-based language tasks.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The results look promising, as shown in Tables 1, 2, and 3.\", \"As this area is outside my expertise, I would defer to my fellow reviewers on:\", \"The relevance of the benchmark tasks and datasets used in the paper\", \"The significance of the results\", \"Any potential biases or issues within the experimental setup\"], \"weaknesses\": [\"Presentation issues:\", \"The introduction and abstract delve too deeply into technical details from the outset. Consider starting with a high-level overview before moving into specifics, which are available in detail later.\", \"Rework Figure 1: It should be consistent with other figures. 
For instance, S1 should connect to S2 with an arrow.\", \"The title of Section 3 (\\\"Problem Setup\\\") feels off.\", \"Section 3.1: The setup is challenging to follow. How are parameters denoted for exit classifiers? Are they trained independently or jointly with the model? Are the exit classifiers linear? Additionally, a brief (one-sentence) explanation of why the KL term is relevant would - improve clarity. Is the inference algorithm novel here, or adapted from previous work?\", \"Section 3.2: Describe the task (e.g., image captioning) first before discussing specific architectures.\", \"Line 219: Statements like \\\"confidence-based early exit methods like DeeBERT and ElasticBERT...\\\" lack supporting evidence. Many claims are subjective, without empirical or theoretical backing.\", \"Section 3.3: The cost vector isn't \\\"learned\\\" in the typical sense; it's set by the model developer. Adjust the wording accordingly.\", \"Lines 256\\u2013259 are confusing; rephrase them. For instance, what is L? Is it the number of layers? Why is S_i < L important? What does i signify?\", \"In Section 3, contributions are not clearly separated from known results in the literature. Improved subsectioning could help delineate the original contributions.\"], \"nit\": [\"Use \\\\citep instead of \\\\citet when appropriate (e.g., Line 46).\", \"Line 215: There's no Equation 5.\", \"Theorem 3.1: Avoid using t for the layer index, as it was previously used for tokens (Line 205). Clarify terms a_t and b_t, which are hard to interpret.\", \"Table 5 should appear after Table 4.\"], \"questions\": \"The paper seems somewhat rushed, with presentation issues that would benefit from an additional round of revision (resubmission basically). From my past experiences, these are unlikely to be fully addressed during the rebuttal period. 
Given the positive empirical results, I believe the paper would be stronger with more attention to clarity in writing, figure design, and structural organisation.\\n\\nHow much are you prepared to revise the presentation of the paper? If you are aiming for acceptance, please outline a specific plan to improve its readability and structure.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the positive remarks and constructive feedback.\", \"que1\": \"Line 262.5: Should it be c_stop or c_stop^t?\\nShould c_misc^t be divided by c_stop on line 264, as it\\u2019s meant to be the count (\\u201crepresents the number of samples \\u2026\\u201d), and thus c_misc^t / c_stop^t would be the p^t error rate? Otherwise we\\u2019re looking at c_misc^t / c_stop^{2 (t)} which doesn\\u2019t seem to be right.\", \"ans_1\": \"Thanks for pointing it out, yes this is a typo and we will fix that in the final version.\", \"que_2\": \"It perhaps would be nice to see the early exit behaviors of some of the techniques with a histogram over exit points, for example. Or even simply the average exit layer over each dataset.\", \"ans_2\": \"Sure, we will add a histogram with number of exiting samples at different layers in the Appendix. Also, please note that we can find the average layer required for a dataset by reversing the speedup and multiplying it by number of layers in the backbone. Speedup is a metric that can easily be converted to different metrics such as expected time taken, expected time reduction rate and average number of layers required.\", \"que_3\": \"Equation 2: Why do we only track the max prediction chain? Alternatively, could we just continue to aggregate weighted confidences until one of them reaches threshold? 
For example, if the early parts of the model are trying to decide between two classes, and thus have comparable (but oscillating max) confidence, in BEEM, these oscillations cause the S_i score to stay small, thus requiring a deeper traversal once the two classes are resolved into one (as we need to build confidence for the single resolved class).\", \"ans_3\": \"Yes, that could also be one possibility, but the reason we do not aggregate the confidence until the prediction is consistent is that the model can gain false confidence for hard samples due to the weaker features extracted at initial layers. Hence, until the model is consistent in its predictions, we do not want to aggregate the confidence. This reduces the chances of false confidence being aggregated from the initial layers.\", \"que_4\": \"Section 4: Why was DNN-9L chosen? Probably a minor question, but unless that comes from a related work, I'm not sure where the 9 came from.\", \"ans_4\": \"DNN-9L denotes the case where all the predictions are made at the 9th layer of the model. This baseline shows the reduction in performance when the number of layers is reduced statically instead of dynamically, which proves the importance of dynamic methods.\", \"que_5\": \"General: It seems intuitive that perhaps some layers are only specifically good at EE-classifying a subset of classes. Have you observed any behavior like this? It would tie into my question about equation 2, which is that, because you're relying on a chain of consistent max labels, uneven error distributions could be harming your overall approach.\", \"ans_5\": \"We had not observed this in the text classification tasks, as they have only 2-5 classes.
However, for the image captioning tasks, we had observed that more common tokens such as \\u2018the\\u2019 and \\u2018a\\u2019 exited at initial layers, while rare or less common tokens such as \\u2018cake\\u2019 and \\u2018knife\\u2019 exited deeper into the backbone.\\n\\nWe hope that we clarified your doubts; please let us know if any further clarifications are required. Once again, thanks for your time and efforts in assessing our work.\"}", "{\"summary\": \"This paper studies early exit (EE) technologies for language tasks and image captioning tasks. More specifically, instead of making decisions based on individual intermediate layers, this paper proposes to use a weighted sum of multiple intermediate layers to achieve stronger results. Thresholds for each layer are set with error rate restrictions on the validation set, to make sure each exit classifier (on early exited examples) performs no worse than the final layer. Speed-up improvements are achieved on both language and image captioning tasks, while on language tasks there\\u2019s also a tiny quality boost.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Extensive experiments are conducted on both language tasks and the image captioning task, with source code attached and plenty of baseline results reported.\", \"The proposed early exit method not only speeds up the inference, but also improves model quality on language tasks.\"], \"weaknesses\": [\"Speed-up factor is only computed based on the theoretical analysis (line 412), which is very misleading, especially for image captioning tasks where exit is only attached to the decoder but not the encoder (line 201). In this case, do you count speed-up only based on decoder layers?
This is problematic; for example, the baseline method MuE in Table 3 applies early exit to both the encoder and the decoder (Figure 2 of https://arxiv.org/pdf/2211.11152), so how is that calculated in line 386? Or do you also count encoder layers as a constant in this case, as the inference cost is never reduced for encoders? If that\\u2019s the case, then it\\u2019s also not accurate due to the shape mismatch between the vision encoder and the language decoder. It would be more useful to report the actual end-to-end inference time speed-up with and without early exit.\", \"Per-layer threshold tuning depends on the validation set of each downstream task, which makes it not practical for broad use cases (e.g. zero-shot or few-shot tasks). Section 6.2 compares vanilla threshold selection with the proposed threshold selection method; do you also have a study showing the impact of validation set size? For example, if you reduce the size of the validation set, would that significantly affect the quality of the proposed threshold setting method?\", \"The proposed method is incremental, where it differs from the baseline methods in whether or not to use a weighted sum of the exit classifiers. And there\\u2019s also no ablation test to verify this point of using a weighted sum versus using simply individual exit classifiers. It's unclear whether we should treat DeeBERT and PABEE as ablation tests because the results are produced with the existing code (line 376) instead of a comparable setup (e.g. exactly the same code from this paper).\", \"questions\": [\"In line 412, to calculate speed-up factors for image captioning tasks, how do you distinguish the cost from the vision encoder and the language encoder / decoder?\", \"Are the vision / language backbones fine-tuned or frozen (where only the new linear layers are trainable)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EzjsoomYEb
Topological Blindspots: Understanding and Extending Topological Deep Learning Through the Lens of Expressivity
[ "Yam Eitan", "Yoav Gelberg", "Guy Bar-Shalom", "Fabrizio Frasca", "Michael M. Bronstein", "Haggai Maron" ]
Topological deep learning (TDL) is a rapidly growing field that seeks to leverage topological structure in data and facilitate learning from data supported on topological objects, ranging from molecules to 3D shapes. Most TDL architectures can be unified under the framework of higher-order message-passing (HOMP), which generalizes graph message-passing to higher-order domains. In the first part of the paper, we explore HOMP's expressive power from a topological perspective, demonstrating the framework's inability to capture fundamental topological and metric invariants such as diameter, orientability, planarity, and homology. In addition, we demonstrate HOMP's limitations in fully leveraging lifting and pooling methods on graphs. To the best of our knowledge, this is the first work to study the expressivity of TDL from a topological perspective. In the second part of the paper, we develop two new classes of architectures -- multi-cellular networks (MCN) and scalable MCN (SMCN) -- which draw inspiration from expressive GNNs. MCN can reach full expressivity, but scaling it to large data objects can be computationally expensive. Designed as a more scalable alternative, SMCN still mitigates many of HOMP's expressivity limitations. Finally, we design new benchmarks for evaluating models based on their ability to learn topological properties of complexes. We then evaluate SMCN on these benchmarks as well as on real-world graph datasets, demonstrating improvements over both HOMP baselines and expressive graph methods, highlighting the value of expressively leveraging topological information.
[ "Topological Deep Learning", "Message Passing", "Higher Order Message Passing", "Expressivity", "Graph Neural Networks", "GNNs", "Topology", "Homology", "Symmetry" ]
Accept (Oral)
https://openreview.net/pdf?id=EzjsoomYEb
https://openreview.net/forum?id=EzjsoomYEb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wbDeYUROPx", "rmaVdZaRNi", "hiVU1vp1eY", "ZfgRybjoRc", "Wi5nCdRF0P", "TvNnuF95Aq", "TiLHLTMDC1", "SeBcjU4vT3", "P9mnuGxLro", "JxWhkfOLIq", "JXMMbKB4fZ", "IyqdmnTZJ9", "IxT7tLgnyb", "GQQ9p1w7f0", "0NeJ9IQ0JI", "0HnmHWlxsE" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1737523647274, 1731955348059, 1730667270075, 1732463611384, 1734026647446, 1731865129593, 1731867701317, 1731867685009, 1732560383290, 1731866205628, 1731867451211, 1730582985338, 1731887863233, 1731865398234, 1731863843342, 1730599358367 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_TNgH" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_TNgH" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Area_Chair_fY7u" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_sF9V" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_8Wou" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_8Wou" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Authors" ], [ "ICLR.cc/2025/Conference/Submission4548/Reviewer_sF9V" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"comment\": \"I thank the authors for their thorough rebuttal. 
All my questions and concerns have been addressed clearly and thoughtfully, and I am raising my score accordingly.\"}", "{\"summary\": \"This work studies the expressivity of Topological Deep Learning (TDL) architectures, particularly focusing on the limitations of Higher-Order Message Passing (HOMP) for distinguishing combinatorial complexes. The first half of the paper extends Bamberger\\u2019s (2022) work on the expressivity limitations of message-passing Graph Neural Networks (GNNs), which characterized, using covering maps, graphs that GNNs cannot distinguish. In a similar vein, this paper reveals \\\"topological blindspots\\\" in HOMP frameworks: (1) complexes that share a cover are indistinguishable by HOMP, and (2) HOMP cannot distinguish complexes that differ in important topological and metric properties such as diameter, orientability, planarity, and homology. In the second half, the authors address these limitations by adapting techniques from expressive graph architectures that process features over tuples of nodes. Similarly, the work extends HOMP with multi-cellular feature spaces and equivariant linear updates. This extension, Multi-Cellular Networks (MCN), achieves full expressive power, allowing it to (1) distinguish non-isomorphic complexes and (2) differentiate complexes based on properties like diameter, 0-th homology group, and also distinguish between a Moebius strip and a cylinder (which disagree on planarity). The work also introduces a more computationally scalable version of MCN, aptly called SMCN. 
Lastly, the authors empirically validate MCN and SMCN on benchmarks designed to capture topological expressivity, demonstrating the superiority of these architectures over standard HOMP and expressive GNN models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(++++) **Novelty and Relevance**: This work is new and addresses questions of significant importance and urgency for the Topological Deep Learning (TDL) community.\\n\\n(++++) **Theoretical Contribution**: The theoretical contribution is strong, rigorous, and sound. The answers provided to the question considered by the work are satisfying.\\n\\n(++) **Empirical Validation**: The authors validate their models with real-world and synthetic benchmarks designed to capture topological expressivity, demonstrating clear improvements over standard HOMP and expressive GNN models.\\n\\n(++) **New Benchmarks**: The work introduces benchmarks that test models on topological invariants to assess TDL expressivity, which will serve as a valuable tool for the TDL community.\", \"weaknesses\": \"(---) **Presentation**: The architecture descriptions may be opaque for readers unfamiliar with TDL. Section 5 gives examples of CC data that the new multicellular nodes can encode, but it is unclear which nodes and connections should be included and when. For example, the choice of multicellular nodes and connections such as in the example tensor diagrams in Figure 5 would benefit from additional explanation.\\n\\n(---) **Limited Empirical Evaluation**: The proposed architectures are benchmarked on only three-world datasets.\\n\\n(--) **Related Work**: Although Section 4 extends Bamberger\\u2019s (2020) result, this work is only mentioned once in Section 4.1. An earlier mention in Section 2 (Previous Work) would help readers place this work within a broader research context.\", \"questions\": \"1. 
The authors demonstrate that HOMP can be extended to achieve full or greater expressivity with components that may make the proposed models computationally impractical for large combinatorial complexes. Is this an inherent limitation that comes with achieving full/greater expressivity, or did the authors only intend to demonstrate that such levels of expressivity are achievable and the proposed extensions sufficed for that purpose?\\n2. Could the authors give recommendations or guidelines for which additional multicellular nodes and connections to include in the MCN and SMCN models? For example, how should a practitioner choose and connect multicellular nodes in the MCN and SMCN layers as in the example in Figure 5?\\n3. Could the authors expand their real-world benchmarks with some more tasks/datasets? For example, the TUDatasets or trajectory classification tasks from Bodnar et al. (2021).\\n4. Are the group actions in Section 5 required to be compatible with the underlying complex? For example, are permutations that exchange nodes while fixing higher-dimensional (rank) simplices allowed?\\n5. Could the authors make their code publicly available?\\n6. Could the authors provide a brief account of Bamberger (2020) in the main text to help contextualize how earlier methods relate to and may have motivated the approaches developed in the first half of this work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updated Manuscript and Code\", \"comment\": \"Dear reviewers, we express our sincere appreciation for the positive evaluations. We have updated the manuscript incorporating your suggestions, with all changes highlighted in light blue for easy reference. 
We additionally provide code for three of our experiments in the supplementary material; the entire repository will be added to the camera-ready version.\"}", "{\"metareview\": \"The paper provides a thorough examination of the expressivity limitations of Higher-Order Message Passing (HOMP) architectures in Topological Deep Learning. It identifies specific \\\"topological blindspots\\\" in HOMP, such as its inability to distinguish between complexes differing in topological and metric properties like diameter, orientability, planarity, and homology. To address these limitations, the authors propose two new architectures: Multi-Cellular Networks (MCN) and Scalable Multi-Cellular Networks (SMCN). MCN achieves full expressive power, allowing it to distinguish non-isomorphic complexes and differentiate key topological features. SMCN offers a computationally scalable variant of MCN, maintaining strong expressivity while reducing complexity. The authors also introduce benchmarks designed to evaluate topological expressivity, demonstrating empirical improvements of the proposed methods over baseline HOMP and GNN models on these benchmarks and real-world graphs. Overall, this work conducts a strong generalization of the existing extensive studies on GNN expressive power to TDL.\\n\\n**Strengths** The research is strong across many aspects including theoretical analysis, algorithmic design, and extensive evaluation. In particular, the systematic extension of the notion of GNN expressive power to TDL is impressive and useful. \\n\\n**Weaknesses** The main concern is about presentation and clarity. The paper is difficult to follow, particularly for readers unfamiliar with TDL. Complex notations and insufficient explanation of architectural components (e.g., Figure 5 and the superscripts in Figure 7) reduce accessibility.
Moreover, in practice, the complexity of the proposed methods should be better justified in comparison with more efficient models that achieve even better empirical performance. \\n\\nOverall, I think this work is a solid theoretical contribution to the field of graph and geometric deep learning, though its empirical value deserves further justification, and a more intuitive exposition is also recommended.\", \"additional_comments_on_reviewer_discussion\": \"The authors address the reviewers' concerns in the discussion, which won unanimous acceptance of the work.\"}", "{\"title\": \"Response to Reviewer TNgH\", \"comment\": [\"We thank the reviewer for highlighting the strengths of our paper, particularly its novelty and relevance to the TDL community. We also appreciate the reviewer\\u2019s constructive feedback and address their comments below:\", \"**Limited Empirical Evaluation (W2 + Q3):** While our work has a strong theoretical focus, we also place significant emphasis on experimental validation. Not only have we introduced several novel benchmarks that address a gap in the field (the torus dataset and topological property prediction tasks), but our paper also includes more extensive experimental evaluation compared to typical theory-focused papers. For instance, [4] (Outstanding Paper Award at ICLR 2023) evaluates only on a single real-world dataset (ZINC) alongside synthetic experiments, [5] (Oral Presentation at ICML2023) explores three real-world benchmarks (matching ours) with one synthetic experiment, and [6] (200+ citations) presents no real-world benchmarks at all.\", \"Nevertheless, in response to this feedback, we will make an effort to expand our real-world evaluation in the revised version by adding experiments on several datasets from the TUDatasets repository. Finally, we note that we cannot include the trajectory classification tasks from Bodnar et al.
(2021) as these rely on cells with orientation, which is out of the scope of our architecture.\", \"**Presentation (W1 + Q2):** Due to the paper's page limit, we only provided an overview of the architectures in the main text, providing a more in-depth description in the Appendix. Specifically, in Appendix B and C, we provide a more thorough description of the general MCN and SMCN frameworks respectively. In addition, as the SMCN framework offers a versatile space of layers and updates, in Appendix F.1 we focus on two potential types of SMCN implementations and thoroughly describe their exact forward pass. Despite this, we agree the main text explanations could be clearer. In the revised version (to be uploaded within the discussion period), we will:\", \"Add details to section 5 to explain what nodes to use and when for each of the examples.\", \"Expand the description of Figure 5 to better illustrate the relationship between node types and update rules\", \"Add specific guidelines for practitioners on selecting appropriate multicellular structures.\", \"Overall, we observe that SCL layers updating edge features, combined with \\\"sequential tensor diagrams\\\", tend to increase the risk of overfitting but perform well on tasks like ZINC, where the training and test distributions are closely aligned. In contrast, SCL layers that update 2-cells, paired with \\\"parallel tensor diagrams\\\", are more effective for tasks sensitive to overfitting. All relevant definitions, including those of \\\"sequential\\\" and \\\"parallel\\\" tensor diagram are provided in Appendix F1.\", \"**Related Work (W3 + Q6)**: We agree that the connection to Bamberger (2020) should be better contextualized. 
We will:\", \"Add a dedicated mention in Section 2 explaining Bamberger's work on covering maps.\", \"Clarify how our topological criterion extends their results.\", \"**Computational practicality (Q1):** The trade-off between expressivity and computational complexity is well-documented in machine learning, particularly in graph learning (e.g. [1,2,3]). Our work provides another data point in this trade-off. Moreover, our framework is flexible \\u2013 specific instantiations can make different complexity trade-offs. For example, we can design models whose complexity primarily depends on the number of 2-cells, which are typically much fewer than nodes or edges. This enables practitioners and other researchers to find the optimal expressivity/computation trade-off for their application. We will clarify these points in the revised manuscript. Additionally, as discussed in the general comment, the runtime of our architecture is comparable to that of CIN (a standard HOMP architecture) and is better than that of subgraph GNNs, see next comment for timing analysis.\", \"**Group actions compatibility (Q4):** Thank you for this important question! No, the group actions do not need to be compatible with the underlying complex structure. The group acts on the enumeration of the cells, not on the complex itself. For example, a cell permutation can reorder the cells c\\u2081 = {s\\u2081, s\\u2082}, c\\u2082 = {s\\u2083, s\\u2084} to c\\u2081 = {s\\u2083, s\\u2084}, c\\u2082 = {s\\u2081, s\\u2082}, effectively shuffling the cell labels while preserving the internal structure of each cell. We will clarify this point in the revised text.\", \"**Code availability (Q5):** Yes, we will make our code publicly available as soon as possible. We are currently in the process of cleaning and organizing it to ensure it is user-friendly for public release. 
We aim to publish the code within the discussion period, and commit to making it available before submission of the camera-ready version.\"]}", "{\"title\": \"References\", \"comment\": \"[1] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. arXiv preprint arXiv:2110.02910, 2021.\\n\\n[2] Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, and Karsten Borg- wardt. Topological graph neural networks. arXiv preprint arXiv:2102.07835, 2021\\n\\n[3] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in neural information processing systems, 32, 2019.\\n\\n[4] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602\\u20134609, 2019.\\n\\n[5] Theodore Papamarkou, Tolga Birdal, Michael M Bronstein, Gunnar E Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Lio, Paolo Di Lorenzo, et al. Position: Topological deep learning is the\\nnew frontier for relational learning. In the Forty-first International Conference on Machine Learning, 2024.\\n\\n[6] Bastian Rieck. On the expressivity of persistent homology in graph learning. arXiv preprint arXiv:2302.09826, 2023.\\n\\n[7] Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the expressive power of gnns via graph biconnectivity. arXiv preprint arXiv:2301.09505, 2023.\\n\\n[8] Lim, Derek, et al. \\\"Sign and basis invariant networks for spectral graph representation learning.\\\" arXiv preprint arXiv:2202.13013 (2022).\\n\\n[9] Bohang Zhang, Lingxiao Zhao, and Haggai Maron. On the expressive power of spectral invariant graph neural networks. 
arXiv preprint arXiv:2406.04336, 2024.\\n\\n[10] Bodnar, Cristian, et al. \\\"Weisfeiler and lehman go cellular: Cw networks.\\\" Advances in neural information processing systems 34 (2021): 2625-2640.\\n\\n[11] Bodnar, Cristian, et al. \\\"Weisfeiler and lehman go topological: Message passing simplicial networks.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\n[12] Hajij, Mustafa, et al. \\\"Topological deep learning: Going beyond graph data.\\\" arXiv preprint arXiv:2206.00606 (2022).\"}", "{\"title\": \"Additional Comments and Timing Analysis\", \"comment\": \"**Additional comments:**\\n\\n- **Learning topological properties in the synthetic experiments (W3):** There is a difference between a model's expressive power and its ability to generalize to unseen test datasets. For instance, MLPs are fully expressive, yet they are not able to achieve perfect accuracy on any task as they are not always able to generalize. The same applies here. That being said, SMCN\\u2019s enhanced topological expressivity directly improves its ability to learn topological properties, as demonstrated in our experiments.\\n\\n- **Readability and comprehension (Q1):** Most of the topological properties mentioned in the paper are easy to intuitively explain but require a long technical discussion to be rigorously defined. Due to length constraints, we only provided intuitive definitions and referred the reader to the Appendix/external sources for a thorough overview. This practice is common in other TDL studies as well (e.g. [2,6]). If the reviewer thinks this would be helpful, we can add full definitions of the topological properties in the Appendix.\\n\\n- **CC covering example (Q2):** We have given an intuitive example of a CC covering in the proof sketch of Theorem 4.3, and depicted it in Figure 3. 
Concrete constructions of covering maps are given in several instances in appendix A.2, and A.3 (Lemma A.5, Proposition A.12, Proposition A.15 and Proposition A.17) and several more figures depicting coverings are given (Figure 9, Figure 10). Due to the page limit we may not be able to add any of these to the main text, but we will make clearer references to the Appendix for better readability.\\n\\n**Timing analysis:**\\n- Dataset construction times (seconds):\\n - ZINC: 322.21 \\u00b1 8.764\\n - MOLHIV: 678.01 \\u00b1 11.38\\n\\n- Train time (seconds) / performance:\\n| Dataset | SMCN | GNN-SSWL+ | CIN |\\n|--------------|------------------|------------------|------------------|\\n| ZINC\\u00a0 | 7.39 \\u00b1 0.17 / 0.060 \\u00b1 0.004 | 9.65 \\u00b1 0.19 / 0.070 \\u00b1 0.005 | 5.35 \\u00b1 0.33 / 0.079 \\u00b1 0.006 |\\n| MOLHIV | 17.70 \\u00b1 0.42 / 81.16 \\u00b1 0.90 | 51.02 \\u00b1 0.25 / 79.58 \\u00b1 0.35 | 14.34 \\u00b1 0.27 / 80.94 \\u00b1 0.57 |\\n\\n- Test time (seconds) / performance:\\n| Dataset | SMCN | GNN-SSWL+ | CIN |\\n|--------------|------------------|------------------|------------------|\\n| ZINC\\u00a0 | 0.93 \\u00b1 0.08 / 0.060 \\u00b1 0.004 | 1.04 \\u00b1 0.03 / 0.070 \\u00b1 0.005 | 0.71 \\u00b1 0.05 / 0.079 \\u00b1 0.006 |\\n| MOLHIV | 2.09 \\u00b1 0.15 / 81.16 \\u00b1 0.90 | 3.07 \\u00b1 0.03 / 79.58 \\u00b1 0.35 | 2.02 \\u00b1 0.12 / 80.94 \\u00b1 0.57 |\\n\\nAll experiments were done on an NVIDIA A100 48 GB GPU. We used the same lifting procedures and dataset construction for both SMCN and CIN so construction time is identical.\\u00a0SMCN incurs computational overhead of approximately 23% on the MOLHIV benchmark and 38% on ZINC compared to CIN (trade-off for its improved predictive performance). 
Additionally, SMCN consistently outperforms subgraph networks in runtime, achieving a ~2.9x speedup on MOLHIV and a ~1.3x speedup on ZINC.\\u00a0The improvement over subgraph networks stems from the fact\\u00a0SMCN uses significantly fewer subgraph updates and leverages higher order topological information instead.\"}", "{\"title\": \"Response to Reviewer sF9V\", \"comment\": \"We greatly appreciate the reviewer\\u2019s positive feedback and constructive criticism. We address the concerns and questions raised by the reviewer below:\\n\\n- **Clarity:** We thank the reviewer for their insightful suggestions! We\\u2019ll integrate these notes in the revised version of the paper (to be uploaded within the discussion period). Specifically, we\\u2019ll make an effort to add further discussion and illustrations to elaborate on neighborhood functions (equations (1), (2) and (3)), multicellular cochains (lines 309 and 310) and group action indices (lines 319 and 320). We note that due to length restrictions, some of these might be added to the Appendix.\\n\\n- **IGN and subgraph GNNs overview:** We\\u2019ll additionally add a deeper overview of IGNs and subgraph GNNs in the Appendix.\\n\\n- **Graph expressivity:** We note that the SMCN and MCN frameworks introduced in the paper are both direct generalizations of the HOMP framework, including CIN [2]. As demonstrated in [2], CIN is strictly more expressive than MPNNs, and this property extends to both SMCN and MCN. Furthermore, the SMCN framework subsumes subgraph-based architectures like ESAN and GNN-SSWL+ [1,6], making it at least as expressive as these methods. The expressive power of these architectures has been thoroughly studied [3,6], providing additional context for understanding the expressivity of SMCN relative to other graph-based approaches. 
This discussion will be added to the Appendix.\\n\\n- **Figure 5 tensor diagram node clarification (Q1):** These nodes represent types of multicellular cochain spaces; we will elaborate on them in the updated version.\\n\\n- **Figure 7 notation clarification:** Thank you for this comment! The superscript indicates cell/node marking in the complex/graph (e.g., a subgraph layer can process a bag of graphs with marked nodes); we will add a clarification in the updated version.\\n\\n- **Runtime analysis:** In the revised version, we will include a wall-clock training and inference time comparison between SMCN, CIN, and subgraph GNNs. For your convenience, we also provide this comparison below:\\n\\n - Dataset construction times (seconds):\\n - ZINC: 322.21 \\u00b1 8.764\\n - MOLHIV: 678.01 \\u00b1 11.38\\n - Train time (seconds) / performance:\\n | Dataset | SMCN | GNN-SSWL+ | CIN |\\n |--------------|------------------|------------------|------------------|\\n | ZINC | 7.39 \\u00b1 0.17 / 0.060 \\u00b1 0.004 | 9.65 \\u00b1 0.19 / 0.070 \\u00b1 0.005 | 5.35 \\u00b1 0.33 / 0.079 \\u00b1 0.006 |\\n | MOLHIV | 17.70 \\u00b1 0.42 / 81.16 \\u00b1 0.90 | 51.02 \\u00b1 0.25 / 79.58 \\u00b1 0.35 | 14.34 \\u00b1 0.27 / 80.94 \\u00b1 0.57 |\\n\\n - Test time (seconds) / performance:\\n | Dataset | SMCN | GNN-SSWL+ | CIN |\\n |--------------|------------------|------------------|------------------|\\n | ZINC | 0.93 \\u00b1 0.08 / 0.060 \\u00b1 0.004 | 1.04 \\u00b1 0.03 / 0.070 \\u00b1 0.005 | 0.71 \\u00b1 0.05 / 0.079 \\u00b1 0.006 |\\n | MOLHIV | 2.09 \\u00b1 0.15 / 81.16 \\u00b1 0.90 | 3.07 \\u00b1 0.03 / 79.58 \\u00b1 0.35 | 2.02 \\u00b1 0.12 / 80.94 \\u00b1 0.57 |\\n\\n All experiments were done on an NVIDIA A100 48 GB GPU. We used the same lifting procedures and dataset construction for both SMCN and CIN so construction time is identical. 
SMCN incurs computational overhead of approximately 23% on the MOLHIV benchmark and 38% on ZINC compared to CIN (trade-off for its improved predictive performance). Additionally, SMCN consistently outperforms subgraph networks in runtime, achieving a ~2.9x speedup on MOLHIV and a ~1.3x speedup on ZINC. The improvement over subgraph networks stems from the fact SMCN uses fewer subgraph updates and leverages higher order topological information instead.\\n\\n**References:**\\n\\n[1] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. arXiv preprint arXiv:2110.02910, 2021.\\n\\n[2] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. Advances in neural information processing systems, 34:2625\\u20132640, 2021.\\n\\n[3] Fabrizio Frasca, Beatrice Bevilacqua, Michael Bronstein, and Haggai Maron. Understanding and extending subgraph gnns by rethinking their symmetries. Advances in Neural Information Processing Systems, 35:31376\\u201331390, 2022.\\n\\n[4] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in neural information processing systems, 32, 2019.\\n\\n[5] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602\\u20134609, 2019.\\n\\n[6] Bohang Zhang, Guhao Feng, Yiheng Du, Di He, and Liwei Wang. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. International Conference on Machine Learning, pages 41019\\u201341077. 
PMLR, 2023.\"}", "{\"comment\": [\"We thank the reviewer for the positive review and constructive feedback. We address the concerns and questions raised by the reviewer below:\", \"**Role of topology in ML (W2):** The reviewer raises an important question that is central to the wider ongoing research in TDL: \\\"What is the significance of topological information to machine learning tasks?\\\" This is indeed an open problem in TDL, highlighted as Open Problem 9 in a recent position paper [5]. In response to this question, numerous TDL approaches attempt to incorporate topological information, either by lifting objects such as graphs into topological spaces (e.g., HOMP models) or to inject specific topological features directly into the learning pipeline [2, 6, 10, 11, 12]. The papers that develop these approaches often show they have desirable theoretical properties, or empirically demonstrate improved performance, suggesting that incorporating topology can meaningfully enhance model capabilities. Section 2 of the paper reviews several methods, which use topological and metric invariants closely related to our discussion. For example, persistent homology [2,6] relies on homology, while architectures such as [7] use metric features closely related to the graph diameter. Additionally, properties like planarity are of interest, as many real-world graphs, such as those representing molecular structures and electrical circuits, are planar. Collectively, these approaches demonstrate both theoretical and practical advantages, often enhancing the expressive power and performance of base models like MPNNs, which do not inherently incorporate topological information.\", \"In this context, our paper makes two key contributions to the TDL literature. First, we show that despite being the de facto standard TDL architecture, HOMP is unable to capture fundamental topological properties\\u2014often the same properties that other methods explicitly integrate into models (e.g. 
homology in [2,6] and diameter in [7]). This limitation highlights the need to design more expressive TDL architectures to fully investigate topology's contribution to learning processes. To that end, we introduce the SMCN framework, which enables models to access a broader range of topological properties. We demonstrate that this framework improves performance on standard graph benchmarks. These findings advance our understanding of the role of topology in machine learning, suggesting a correlation between a model's ability to capture topological information and its empirical performance over real-world benchmarks.\", \"**Runtime concerns (W1):** In the revised version (to be uploaded within the discussion period), we will provide a wall-clock training/inference comparison of all of SMCN, CIN and subgraph GNNs as well as a more emphasized runtime complexity analysis. See the next comment for timing analysis. Additionally, we note that:\", \"Generally, as is common in GNNs, methods that enhance expressivity often come with increased runtime complexity [1,3,4]. Our approach provides another datapoint on this spectrum.\", \"SMCN is a general framework and runtimes can change for different specifications, e.g. for ZINC we use a version of SMCN with runtime complexity of $O(d \\\\cdot n_0 \\\\cdot n_1)$ and for MOLHIV+MOLESOL+synthetic experiments we use a version with runtime complexity of $O(d \\\\cdot n_0 \\\\cdot n_2)$.\", \"As mentioned in proposition 6.6, SMCN architectures with runtime complexity $O(d \\\\cdot n_0 \\\\cdot n_2)$ are still more expressive than HOMP approaches like CIN, but have a better runtime complexity than subgraph methods, since in most relevant, real-world settings $n_2 \\\\ll n_0$ (e.g. number of cycles is smaller than the number of nodes).\", \"**Comparison to Yan et al. (Q3):** We thank the reviewer for introducing us to this relevant paper. We will make sure to add a comparison of our approach to this paper in the next version. 
In terms of experimental results, CycleNet got a score of 0.068 on ZINC while our model got a score of 0.060. In terms of expressive power, we provide a proof sketch showing that SMCN is more expressive than CycleNet:\", \"First, recall that CycleNet leverages BasisNet + spectral embedding [8] applied to the 1-Hodge Laplacian of the input graph. The resulting output is then used as edge features, which are subsequently processed by a final MPNN applied to the graph. In cases where the input graphs are simple and undirected, the 1-Hodge Laplacian is exactly equal to the 1-0 coadjacency matrix (plus a diagonal term of $2\\\\mathbf{I}$ which can be ignored). SMCN can apply subgraph GNNs to the $co\\\\mathcal{A}_{1,0}$ Hasse graph, and use the resulting features as inputs for an MPNN. This reduces our proof to demonstrating that subgraph GNNs are more expressive than BasisNet. As recently shown in [9], BasisNet + spectral embedding is strictly less expressive than PSWL, a subclass of subgraph GNNs that is itself less expressive than the base subgraph GNN used in SMCN. This completes the proof.\"]}
Empirical evaluations using newly designed benchmarks and real-world graph datasets demonstrate that SMCN outperforms existing models, highlighting the potential of using expressive topological information in TDL.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe authors provide a rigorous examination of HOMP\\u2019s expressivity limitations concerning fundamental topological and metric invariants, establishing the groundwork for understanding the weaknesses in current TDL approaches.\\n\\n2.\\tThe introduction of MCN and SMCN provides new pathways for achieving higher expressivity. The authors demonstrate that MCN can theoretically achieve full expressivity, while SMCN offers a computationally feasible alternative, balancing expressivity with scalability.\\n\\n3.\\tSMCN demonstrates substantial improvements over traditional HOMP and GNNs, validating the model's efficacy in capturing and leveraging topological features in learning tasks.\", \"weaknesses\": \"1.\\tWhile SMCN offers a scalable alternative to MCN, it still encounters significant challenges in managing large and complex combinatorial complexes due to its super-linear scaling with respect to the number of cells. It would be beneficial for the authors to provide a comparative analysis of SMCN\\u2019s runtime performance against existing HOMP methods and other GNNs, such as CIN (Bodnar et al., 2021) and the backbone subgraph GNN used in SMCN.\\n\\n2.\\tThe paper would benefit from a more detailed rationale for why topological invariants\\u2014such as diameter, orientability, planarity, and homology\\u2014are critical for machine learning models to differentiate. 
Specifically, it would be helpful to address the importance of these invariants in machine learning tasks, either through empirical evidence, theoretical reasoning, or relevant literature.\\n\\n3.\\tThere appears to be a discrepancy between SMCN\\u2019s theoretical expressivity and its empirical accuracy. For instance, it is unclear why SMCN does not achieve full accuracy in predicting cross-diameter and the second Betti number, which warrants further investigation.\", \"questions\": \"1.\\tTo enhance readability and comprehension, the main paper should be more self-contained by clearly explaining topological invariants\\u2014such as diameter, orientability, planarity, and homology. For clarity, consider presenting formulas or examples within the main text (e.g., rather than relying solely on descriptions on line 233).\\n\\n2.\\tIn the introduction of the concept of CC covering, an illustrative case could help readers grasp this concept more intuitively.\\n\\n3.\\tCould the authors discuss the proposed method in relation to relevant research? For instance, [1] proposed a cycle-invariant positional encoding where cell/cycle features are initially encoded using an invariant network and subsequently incorporated into a standard GNN as additional edge features. A comparison would contextualize SMCN\\u2019s contributions within the broader landscape of topological GNN methods.\\n\\n[1] Yan Z, Ma T, Gao L, et al. Cycle invariant positional encoding for graph representation learning. LoG 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their rebuttal. 
After reading all the reviews and responses, I think the authors have largely addressed my concerns, and will raise my score accordingly.\"}", "{\"title\": \"Timing Analysis and References\", \"comment\": \"**Timing analysis:**\\n- Dataset construction times (seconds):\\n - ZINC: 322.21 \\u00b1 8.764\\n - MOLHIV: 678.01 \\u00b1 11.38\\n\\n- Train time (seconds) / performance:\\n| Dataset | SMCN | GNN-SSWL+ | CIN |\\n|--------------|------------------|------------------|------------------|\\n| ZINC\\u00a0 | 7.39 \\u00b1 0.17 / 0.060 \\u00b1 0.004 | 9.65 \\u00b1 0.19 / 0.070 \\u00b1 0.005 | 5.35 \\u00b1 0.33 / 0.079 \\u00b1 0.006 |\\n| MOLHIV | 17.70 \\u00b1 0.42 / 81.16 \\u00b1 0.90 | 51.02 \\u00b1 0.25 / 79.58 \\u00b1 0.35 | 14.34 \\u00b1 0.27 / 80.94 \\u00b1 0.57 |\\n\\n- Test time (seconds) / performance:\\n| Dataset | SMCN | GNN-SSWL+ | CIN |\\n|--------------|------------------|------------------|------------------|\\n| ZINC\\u00a0 | 0.93 \\u00b1 0.08 / 0.060 \\u00b1 0.004 | 1.04 \\u00b1 0.03 / 0.070 \\u00b1 0.005 | 0.71 \\u00b1 0.05 / 0.079 \\u00b1 0.006 |\\n| MOLHIV | 2.09 \\u00b1 0.15 / 81.16 \\u00b1 0.90 | 3.07 \\u00b1 0.03 / 79.58 \\u00b1 0.35 | 2.02 \\u00b1 0.12 / 80.94 \\u00b1 0.57 |\\n\\nAll experiments were done on an NVIDIA A100 48 GB GPU. We used the same lifting procedures and dataset construction for both SMCN and CIN so construction time is identical.\\u00a0SMCN incurs computational overhead of approximately 23% on the MOLHIV benchmark and 38% on ZINC compared to CIN (trade-off for its improved predictive performance). 
Additionally, SMCN consistently outperforms subgraph networks in runtime, achieving a ~2.9x speedup on MOLHIV and a ~1.3x speedup on ZINC.\\u00a0The improvement over subgraph networks stems from the fact\\u00a0SMCN uses significantly fewer subgraph updates and leverages higher order topological information instead.\\n\\n**References:**\\n\\n[1] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. arXiv preprint arXiv:2110.02910, 2021.\\n\\n[2] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in neural information processing systems, 32, 2019.\\n\\n[3] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602\\u20134609, 2019.\\n\\n[4] Zhang, Bohang, et al. \\\"Rethinking the Expressive Power of GNNs via Graph Biconnectivity.\\\" The Eleventh International Conference on Learning Representations.\\n\\n[5]\\u00a0 Omri Puny, Derek Lim, Bobak Kiani, Haggai Maron, and Yaron Lipman. Equivariant polynomials for graph neural networks. In International Conference on Machine Learning, pages 28191\\u201328222. PMLR, 2023.\\n\\n[6] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. arXiv preprint arXiv:2010.01179, 2020.
Our theory section answers this question, exposing a fundamental limitation of a large class of TDL architectures (**TNgH**, **sF9V**, **8Wou**).\\u00a0\\n3. We construct provably expressive architectures and empirically demonstrate their improved performance (**TNgH**, **8Wou**).\\n4. We construct valuable datasets and benchmarks (**TNgH**, **sF9V**).\\n\\nAdditionally, in response to reviewers' inquiries regarding computational practicality, we conducted a runtime comparison between SMCN, CIN and the backbone subgraph GNNs. SMCN incurs computational overhead of approximately 23% on the MOLHIV benchmark and 38% on ZINC compared to CIN (trade-off for its improved predictive performance). Additionally SMCN consistently outperforms subgraph networks in runtime, achieving a ~2.9x speedup on MOLHIV and a ~1.3x speedup on ZINC.\"}", "{\"summary\": \"The paper presents a new way to define expressivity but from a topological perspective. The paper shows that existing Topological Deep Learning (TDL) models have trouble estimating certain important topological properties such as diameter, orientability, planarity, and homology. The authors then propose a new model called MCN and its scalable version SMCN that can capture the topological properties better. The paper also presents new benchmarks focusing on learning topological properties of complexes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"The paper presents a novel essential perspective that differentiates TDL and deep learning on graphs, which used to be compared under the same umbrella previously (in terms of graph expressivity and benchmarks). This work partly bridges the gap between TDL and traditional computational topology methods, which largely focus on homology. The paper also presents theoretical insights showing that higher-order message passing is incapable of capturing certain topological metrics. The experiments also support these insights. 
The experiments also constrain on parameter budgets to show the effectiveness of the method with respect to the baseline. The paper acknowledges the weakness of MCN if we consider higher-order spaces, so the authors present a scalable version that leverages subgraph GNNs to encode higher-order features. The novel benchmarks are a great contribution which can facilitate a better comparison standard for TDL methods.\", \"weaknesses\": \"The paper is hard to follow and the presentation is not good. For example, the authors can be more explicit on lines 309 and 310 and discuss (1), (2), and (3) more instead of stating them. The notations involve many upper scripts and lower scripts while not explaining their purposes clearly make the formula confusing (line 319 and 320). The paper isn\\u2019t self-contained; for example, the paper can discuss more about IGN as the model itself leverages an architecture similar to IGN. The same thing applies to Subgraph GNNs. Please also refer to the question section for further comments on clarity. Also, even when the paper focuses on expressivity from a topological perspective, I think it would be helpful to include a brief section discussing the proposed method with respect to graph expressivity so that there is a smoother transition between deep learning on graphs and TDL. Lastly, please refer to question 3 for experiments on runtime and lifting time.\", \"questions\": \"1. For Figure 5, can you elaborate more on colored nodes of SMCN and MCN? I think this part can be improved to make it clearer for the audience.\\n2. For Figure 7, the superscript for \\\\mathcal{X} and \\\\mathcal{H} isn\\u2019t discussed in the main text, so it is confusing.\\n3. Can the authors comment on the lifting and runtime complexities with respect to CIN? I think the paper only mentions the runtime complexity and neglects the discussion. 
It is also helpful to include an experiment on wall-clock training/inference time to see the model scalability in practice when comparing with other TDL models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
EzB0n8aRqI
Towards Better Understanding Open-set Noise in Learning with Noisy Labels
[ "Chen Feng", "Nicu Sebe", "Ioannis Patras" ]
To reduce reliance on labeled data, learning with noisy labels (LNL) has garnered increasing attention. However, most existing works primarily assume that noisy datasets are dominated by closed-set noise, where the true labels of noisy samples come from another known category, thereby overlooking the widespread presence of open-set noise—where the true labels may not belong to any known category. In this paper, we refine the LNL problem by explicitly accounting for the presence of open-set noise. We theoretically analyze and compare the impacts of open-set and closed-set noise, as well as the differences between various open-set noise modes. Additionally, we examine a common open-set noise detection mechanism based on prediction entropy. To empirically validate our theoretical insights, we construct two open-set noisy datasets—CIFAR100-O and ImageNet-O—and introduce a novel open-set test set for the widely used real-world noisy dataset, WebVision. Our findings indicate that open-set noise exhibits distinct qualitative and quantitative characteristics, underscoring the need for further exploration into how models can be fairly and comprehensively evaluated under such conditions.
[ "Open-set noise", "Noisy labels" ]
Reject
https://openreview.net/pdf?id=EzB0n8aRqI
https://openreview.net/forum?id=EzB0n8aRqI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tk2gThEuzv", "rgCW3sHAqj", "pu7gYU4HwL", "kYXFhYhO0b", "jmgC8vtt7D", "haOg2Zb9J6", "hAeNmZC41C", "eZGevYbFX6", "dJZwft2TTt", "W4GfoUoSAX", "Usa64OmBeb", "Suknc2ul3H", "RRLPfRXZRp", "QKMrBcTpHy", "MZul0IEzgx", "Gw9HCPMrOc", "6p6AR9HzCv", "6IRFFDVydL", "4xp9kcvI0N", "2eziCbJi4N" ], "note_type": [ "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730022597336, 1732972960853, 1734406390138, 1730547914389, 1732398671082, 1732663864614, 1730707614651, 1732398810512, 1732808861913, 1732398844729, 1732633646809, 1732398421249, 1737523658403, 1733400317013, 1732398478382, 1732398624926, 1732398536547, 1732398866924, 1732972503032, 1732399030189 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4732/Reviewer_rg5p" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Area_Chair_z4MB" ], [ "ICLR.cc/2025/Conference/Submission4732/Reviewer_FqR8" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Reviewer_op1H" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Reviewer_op1H" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "~Jiawei_Ge2" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4732/Authors" ], [ "ICLR.cc/2025/Conference/Submission4732/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors theoretically analyze and compare the impacts of open-set and closed-set noise, as well as the differences between various open-set noise modes. Additionally, they examine a common open-set noise detection mechanism based on\\nprediction entropy. Moreover, to validate their insights, they construct two open-set noisy datasets and an open-set test set for evaluation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The paper is easy to follow and the conclusions are easy to understand.\\n\\n2.The mathematical analysis is adequate and there are experimental results to support these insights.\\n\\n3.The experimental figures are clear and adequate on the constructed dataset.\", \"weaknesses\": \"1.The way of constructing benchmark is quite similar to existing benchmarks in LNL. Thus it may not be suitable to list it as a contribution.\\n\\n2.Though the author introduces a hard open-set noise (which seems like a combination of feature-dependent noise and open-set noise), it seems that the author does not design a method to tackle such kind of noise.\\n\\n3.Some of the conclusions may be naive and simple. For example, \\\" it may be effective only for \\u2018easy\\u2019 open-set noise.\\\". Because entropy-based methods generally fail to detect close-set feature-dependent noise as well.\", \"there_are_some_minor_issues_that_will_not_affect_the_rating\": \"1.\\\" the concept of complete noise transition matrix\\\" should be \\\" the concept of a complete noise transition matrix\\\".\\n\\n2. \\\"namely fitted case and overfitted case\\\" should be \\\"namely the fitted case and overfitted case\\\".\\n\\n3.It would be better if the contributions are more compact. 
There are six contributions now.\", \"questions\": \"What does this sentence mean: obtaining a model that perfectly fits the data distribution is often challenging; here, we consider training a single-layer linear classifier upon a frozen pretrained encoder. I think a single-layer linear classifier may even be worse to fit the data distribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer op1H,\\n\\nWe sincerely apologize for reaching out to you again, and we hope this does not disrupt your Thanksgiving holiday if you celebrate it. **We deeply appreciate your acknowledgment of our rebuttal and your support for the acceptance of our paper. We sincerely wonder if you might consider increasing the rating and what further efforts we could make in this regard\\u2014your further support is crucial for this borderline submission.**\\n\\nThank you once again for your time and support. We greatly appreciate your detailed review and valuable feedback on our submission. Your insights have been instrumental in helping us refine and improve the quality of our work. We are truly grateful for your assistance and look forward to your reply.\\n\\nBest regards,\\n\\nAuthors of Submission 4732\"}", "{\"metareview\": \"This submission received the ratings of three reviewers, which recommended 6, 3 and 5, averaging 4.67. Given the plenty of competitive submissions in ICLR, this stands at a score below the borderline. The AC has noticed that a reviewer who rated 3 provided a very short review, which has been ignored in evaluating the overall contribution. One reviewer rating 5 mainly concerned about the contribution given the progress of the whole area and previous similar constructions, including the potential solution, which is well addressed based on the author rebuttal. 
To some extent, I agree with the comment of the reviewer rating 6 about presenting an observation rather than introducing a novel method; however, given the competitive submissions, I also feel that there is still some effort that can be made to improve the submission, such as exploring potential solutions alongside the analysis and observations, which would make the submission stand out among its peers. Currently, I regretfully tend to recommend rejection considering the overall reviews of the two constructive reviewers, and hope the comments help the improvement of the submission.\\n\\nThe AC.\", \"additional_comments_on_reviewer_discussion\": \"1. Only presenting the observation and analysis, as mentioned by one constructive reviewer.\\n\\nNot well addressed.\"}", "{\"summary\": \"This paper analyzes the learning with noisy labels problem by explicitly accounting for the presence of open-set noise. However, only theoretical analysis and dataset construction are not sufficient to be published in ICLR.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper analyzes the learning with noisy labels problem by explicitly accounting for the presence of open-set noise.\", \"weaknesses\": \"Only theoretical analysis and dataset construction are not sufficient to be published in ICLR.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer op1H (Part 4/4)\", \"comment\": \"**2. Overfitted case**:\\n\\nWe then re-investigate the overfitted case. 
Following the *error rate inflation* induced in the overfitted case (L264-L267) in the main paper, we have:\\n- Easy open-set noise with $x_1$:\\n$\\\\Delta E_{x_1} = \\\\max[p^1_1, ...,p^1_A] - \\\\sum_{i=1}^A (p^1_i\\\\cdot \\\\sum_{j=1}^{A+B}p^1_j T^1_{ji}) \\\\\\\\\\n= \\\\max[p^1_1, ...,p^1_A] - \\\\sum_{i=1}^A \\\\big(p^1_i\\\\cdot (\\\\sum_{j=1}^{A}p^1_j T^1_{ji} + \\\\sum_{j=A+1}^{A+B}p^1_j T^1_{ji})\\\\big)\\\\\\\\\\n \\\\xrightarrow{T^1_{in}\\\\neq \\\\mathbf{I}, \\\\ T^1_{out} = T^{easy}} \\\\\\\\\\n= \\\\max[p^1_1, ...,p^1_A] - \\\\sum_{i=1}^A p^1_i (\\\\sum_{j=1}^{A}p^1_j T^1_{ji}+\\\\frac{1}{A}\\\\sum_{j=A+1}^{A+B} p^1_j).$\\n\\t \\n- Hard open-set noise with $x_2$:\\n$\\\\Delta E_{x_2} = \\\\max[p^2_1, ...,p^2_A] - \\\\sum_{i=1}^A (p^2_i\\\\cdot \\\\sum_{j=1}^{A+B}p^2_j T^2_{ji})\\\\\\\\\\n= \\\\max[p^2_1, ...,p^2_A] - \\\\sum_{i=1}^A \\\\big(p^2_i\\\\cdot (\\\\sum_{j=1}^{A}p^2_j T^2_{ji} + \\\\sum_{j=A+1}^{A+B}p^2_j T^2_{ji})\\\\big)\\\\\\\\\\n \\\\xrightarrow{T^2_{in}\\\\neq \\\\mathbf{I}, \\\\ T^2_{out} = T^{hard}} \\\\\\\\\\n= \\\\max[p^2_1, ...,p^2_A] - \\\\sum_{i=1}^A p^2_i(\\\\sum_{j=1}^{A}p^2_j T^2_{ji}+\\\\sum_{j\\\\in H_i}p^2_j)$\\n\\nSimilarly, we have:\\n\\n$\\\\Delta E_{x_1} - \\\\Delta E_{x_2} = \\\\sum_{i=1}^A p^2_i(\\\\sum_{j=1}^{A}p^2_j T^2_{ji}+\\\\sum_{j\\\\in H_i}p^2_j)-\\\\sum_{i=1}^A p^1_i (\\\\sum_{j=1}^{A}p^1_j T^1_{ji}+\\\\frac{1}{A}\\\\sum_{j=A+1}^{A+B} p^1_j)\\n=\\\\sum_{i=1}^A p^1_i (\\\\sum_{j\\\\in H_i}p^1_j - \\\\frac{1}{A}\\\\sum_{j=A+1}^{A+B} p^1_j)$ \\n\\nWe note that the result aligns with L1052\\u2013L1056 in Appendix D.2. **Therefore, the presence of additional closed-set noise does not affect the conclusion in the overfitted case.**\\n\\n*We hope the above can further resolve your concerns. 
We have also updated the manuscript accordingly with the above analysis in `Appendix D.3`.*\\n \\n> **W3: Experiments on more methods: *'Experiments on the main paper \\u2026 add more LNL methods and discusses how different methods behave and align with the theorems in the main paper.'***\\n\\nMany thanks for the suggestion. We kindly refer you to our reply to W1.\\n\\n> ***Q1: 'For Figure 2 (c),(d), Authors state in the paper that \\\"the presence of open-set noise degrades OOD detection performance, whereas, conversely, the presence of closed-set noise could even improve OOD detection performance.\\\". However, from my observation, the closed-set noise does not improve detection performance from the Figure.'***\\n\\nWe apologize for the misleading statement. Our intention was to convey that the results presented in Figures 2(c/d) demonstrate that, in the presence of open-set noise, the OOD detection performance is even better than the Clean Baseline (indicated by the dotted line in the figure). To better illustrate this intention, we have updated the manuscript in L483 - L487 and added the following clarification:\\n\\n*`For example, we notice that in the fitted case, the existence of open-set noise leads to steady improvement in OOD detection performance for both CIFAR100-O and ImageNet-O datasets, across different noise ratios.`*\\n\\n> ***Q2: 'In Section 3.3.2, Is it possible to assume the same pattern and noise ratio of close-set noise and then study how different open-set noise compare to each other?'***\\n\\nMany thanks for the question. We kindly refer you to our reply to W2.\\n\\n> ***Q3: 'For Figure 2, is it drawn when model converges for each case?'***\\n\\nMany thanks for the question. We apologize for a typo in the subcaptions of Figure 2 - the 'memorized case' in the old figure indeed refers to the *overfitted case*, where the model (PreResNet18) is trained for a sufficiently long time and converges. 
For the *fitted case*, we train a linear classifier on top of a pretrained frozen encoder, also, for a sufficiently long time. We believe this setup limits the model capacity while ensuring that the dataset is already well-represented, allowing us to approximate the *fitted case*.\\n\\n**Thank you once again for your time and valuable feedback, which have helped us improve our work. If there are any additional questions or clarifications needed, please do not hesitate to let us know. We would also kindly ask if you might consider revising your score, should our responses satisfactorily resolve the points you raised. We truly appreciate your consideration and look forward to your further comments.**\"}", "{\"title\": \"Thanks to reviewer op1H for the recognition\", \"comment\": \"Dear Reviewer op1H,\\n \\n\\nThank you for your kind response and recognition of our method. **In the context of the increasingly zero-sum dynamics of the ICLR community, your acknowledgement is particularly meaningful to us.** Your suggestion for robust loss functions and your insights on theoretical aspects are of great value in improving the quality of our paper. \\n\\n**We approach you with humility and sincerity to seek your guidance on whether there is any possibility for further adjustments that could earn your greater support.** We are more than willing to make further improvements and optimizations to the manuscript based on your suggestions to maximize its academic quality. \\n\\nAdditionally, we would like to briefly mention that we have also explored **two potential methods for addressing open-set noise**, which are detailed in our responses to reviewer *rg5p* and in `Appendix G` of the updated manuscript. We hope this additional effort aligns with your expectations and earns your support.\\n\\nWe would like to express our sincere gratitude for your continuous support and valuable feedback. 
We sincerely hope that this message will not cause you any inconvenience and look forward to your reply. \\n\\n\\nWith our deepest respect,\\n\\nAuthors of submission 4732\"}", "{\"summary\": \"This paper addresses the challenge of open-set noise in learning with noisy labels (LNL), a problem where the true labels of noisy samples may not belong to any known category. The authors refine the LNL problem by accounting for open-set noise and theoretically analyze its impact compared to closed-set noise. They construct two open-set noisy datasets and introduce a novel open-set test set for the WebVision dataset to empirically validate their findings. The results indicate that open-set noise has distinct characteristics and a lesser negative impact on model performance compared to closed-set noise, highlighting the need for further exploration into model evaluation under such conditions. The paper also examines an entropy-based open-set noise detection mechanism and proposes additional out-of-distribution detection tasks for model evaluation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper studies a detailed examination of open-set noise in the context of learning with noisy labels, an area that has been largely overlooked in previous research.\", \"It offers a robust theoretical framework to analyze the effects of open-set noise and supports these findings with empirical evidence through the creation and testing on synthetic datasets.\", \"The paper provides a careful look at different modes of open-set noise, comparing 'easy' and 'hard' open-set noise scenarios, which is crucial for understanding how various types of noise affect model performance.\", \"It introduces the use of out-of-distribution (OOD) detection as a complementary evaluation metric to traditional accuracy measures, enhancing the assessment of model performance in the presence of open-set noise.\", \"The paper underscores the significance of 
open-set noise in real-world datasets and demonstrates the practical impact of its findings on existing learning methods, highlighting the need for more research in this area.\"], \"weaknesses\": [\"The paper examines a few existing learning with noisy labels (LNL) methods on the synthetic datasets, outlined in Appendix E.1. However, it does not explore a wide range of existing methods such as robust losses in LNL, which could limit the comprehensiveness of the comparison and the conclusions drawn about the state-of-the-art in handling open-set noise.\", \"For Section 3.3.2, Authors exclude the effect of closed-set noise (Cx = 0) and only focus on open-set noise which could limits the findings in real-world scenarios. For example, It is not clear how different open-set noise with same close-set label noise.\", \"Experiments on the main paper are not thorough enough. I suggest add more LNL methods and discusses how different methods behave and align with the theorems in the main paper.\"], \"questions\": [\"For Figure 2 (c) (d), Authors state in the paper that \\\"the presence of open-set noise degrades OOD detection performance, whereas, conversely, the presence of closed-set noise could even improve OOD detection performance.\\\". However, from my observation, the closed-set noise does not improve detection performance from the Figure.\", \"In Section 3.3.2, Is it possible to assume the same pattern and noise ratio of close-set noise and then study how different open-set noise compare to each other?\", \"For Figure 2, is it drawn when model converges for each case?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer rg5p (Part 1/3)\", \"comment\": \"Thanks very much for the careful review. We sincerely appreciate your time and effort in reading our paper, as well as the insightful and constructive feedback. 
To fully address your questions, we have conducted additional experiments and analyses. We kindly ask for your understanding regarding the slightly lengthy response and sincerely look forward to engaging in further discussions with you.\\n\\n> **W1: Benchmark construction: *'The way of constructing benchmark is quite similar to existing benchmarks in LNL.'***\\n\\nMany thanks for the comment. Regarding *'the way of constructing benchmark'*, we would like to reiterate the content from our paper to emphasize once again how our approach to dataset construction differs from existing methods:\\n \\n*`Previous works involving open-set noise also attempted to build synthetic noisy datasets, typically treating different datasets as open-set noise for each other to construct synthetic noisy dataset (Sachdeva et al., 2021; Wu et al., 2021). In this scenario, potential domain gaps could affect a focused analysis of open-set noise. In this work, we propose selecting inlier/outlier classes from the same dataset to avoid this issue. Besides, in previous works, the consideration of open-set noise modes often focused on random flipping from outlier classes to all possible inlier classes, which is indeed the \\u2018easy\\u2019 open-set noise adopted in this paper.`*\", \"we_emphasize_the_following_key_points\": \"1. **Avoiding domain gaps**: Our construction method selects inlier and outlier classes from the same dataset, minimizing the implicit effects of domain gaps in previous works.\\n2. **Addressing both 'easy' and 'hard' open-set noise**: Unlike prior benchmarks, we consider multiple open-set noise modes, including the challenging 'hard' open-set noise.\\n\\n**We understand that the reviewer might prefer a newly collected, large-scale open-set noisy dataset. Regrettably, this was not the goal of our work. 
We consider such a contribution to be more suitable as a dataset & benchmark track paper, and the budget required to collect such a dataset is beyond our current resources.** Nonetheless, we made an extra effort to provide an open-set test dataset based on the real-world noisy dataset, WebVision. We hope this additional contribution addresses the reviewer\\u2019s concerns and demonstrates our commitment to advancing the study of open-set noise in a meaningful way.\\n\\n> **W2: Method for learning with hard open-set noise: *'... it seems that the author does not design a method to tackle such kind of noise.'***\\n\\nMany thanks for the question. While we would like to reiterate that our aim in this work is not to propose a new empirical solution, we are happy to provide some potential ideas. Based on the theoretical analysis presented in our paper, we observe that existing methods, such as entropy-based detection mechanisms, may struggle to handle 'hard' open-set noise\\u2014this type of noise primarily arises from semantic similarities between open-set noise and closed-set categories. **Below, we explore two different methods and present the results of preliminary experiments.**\\n\\n**1. Entropy-based open-set noise detection with trained encoder**\\n\\nWe first investigate whether pretrained encoders can assist in identifying open-set noise. Compared to randomly initialized feature spaces, we expect that pretrained encoders, with their better-organized representations, may more effectively distinguish challenging open-set samples. Specifically, we observe the entropy dynamics of open-set noise and clean samples after replacing the randomly initialized encoder in the main paper with a pretrained encoder.\\n\\nWe first consider **self-supervised pretraining**. Specifically, we apply the MoCo framework [1] to pretrain the encoder for 500 epochs. Below, we show the entropy dynamics at different warmup training epochs with pretrained encoder:\\n\\n*Figure R1. 
Entropy dynamics with Self-supervised pretrained encoder* [(Clickable anonymous link)](https://anonymous.4open.science/r/ICLR25-submission4732/SelfSupPretrained_entropydynamic.pdf)\\n\\nWe also consider utilizing the **pretrained vision encoder of the CLIP model** [2]. \\n\\n*Figure R2. Entropy dynamics with CLIP encoder (VIT-B/32)* [(Clickable anonymous link)](https://anonymous.4open.science/r/ICLR25-submission4732/CLIPencoder_entropydynamic.pdf)\\n\\n\\n**Unfortunately, by comparing Figures R1/2 above with Figure 3 in the paper, we observe that neither of the two pretrained encoders results in noticeable improvements. The entropy-based open-set noise detection mechanisms remain effective only for 'easy' open-set noise and continue to show insensitivity to 'hard' open-set noise.**\"}", "{\"title\": \"Invitation for further discussion\", \"comment\": \"Dear Reviewer rg5p,\\n\\nThank you very much for your detailed review and valuable feedback on our submission. **Your insights are crucial for us to optimize and improve the quality of our work.** We have carefully considered your suggestions and have made several revisions to the manuscript.\\n\\n**In particular, based on the theoretical insights provided in the paper, we have considered two additional methods to address the challenges of 'hard' open-set noise and conducted preliminary experimental explorations.** We have found that leveraging vision-language models to address open-set noise holds great potential, and we sincerely hope these explorations align with your expectations.\\n\\nAdditionally, based on the suggestions from *Reviewer op1H*, **we have also examined the performance of robust loss functions in handling open-set noise**. 
We sincerely hope this addition can further earn your support.\\n\\n**If you have any additional suggestions or comments, we would greatly appreciate the opportunity for us to provide further clarifications before your final decision.** We believe that your guidance would significantly enhance the quality of our manuscript.\\n\\nThank you once again for your time and support. We truly appreciate your help and look forward to your response.\\n\\nBest regards,\\n\\nAuthors of Submission 4732\"}", "{\"title\": \"Response to Reviewer rg5p (Part 2/3)\", \"comment\": \"**2. Zeroshot open-set noise detection with CLIP**\\n\\nDue to its multi-modal nature, we further try to utilize CLIP for zero-shot open-set noise detection. Specifically, we design a simple algorithm to compute an intuitive indicator value for identifying open-set noise. For each sample $x$ with annotated label $y$,\\n1. Generate Text Prompts:\\n - For the target class $y$, we create a text prompt: \"A photo of class {$y$}.\".\\n - For non-target classes, we consider a set of prompts: [\"A photo of class {$i$}.\" for $i$ $\\\\in$ $L_y$]. Here, we denote as $L_y$ the possible source classes to which the sample $x$ may belong. Practically, $L_y$ can be a broad set of classes, such as the 1K classes from the ImageNet-1K dataset, or it can be manually defined to include semantically-challenging classes; for example, ['tiger', 'cheetah'] for class 'cat'. In the experiments below, we default to the first option.\\n\\n2. Calculate Similarities:\\n - Similarity to the target class:\\n $S_y = \\\\text{sim}(v_x, t_y)$. Here, $v_x$ and $t_y$ denote the visual and textual representations, respectively.\\n - Maximum similarities to non-target classes:\\n $S_{\\\\text{other}} = \\\\max \\\\{ \\\\text{sim}(v_x, t_{i}) \\\\mid i \\\\in L_y \\\\}$.\\n\\n3. 
Compute the Difference: $D_x = S_y - S_{\\text{other}}$.\\n\\nIntuitively, we measure and compare the similarity of the visual semantics of sample $x$ to its annotated text label and the most likely labels from the source classes. To illustrate the effectiveness of $D_x$ as an open-set noise indicator, we plot the distribution of $D_x$ for different samples below:\\n\\n*Figure R3. Zeroshot open-set noise detection with CLIP* [(Clickable anonymous link)](https://anonymous.4open.science/r/ICLR25-submission4732/CLIP_zeroshot_selection.pdf)\\n\\n**We notice that, compared to the entropy-based open-set detection mechanism, the zero-shot open-set identification brings steady improvements.**\\n\\nPlease note that these are preliminary experiments and initial attempts\\u2014we plan to explore this more deeply in our future work. We have also updated the manuscript `Appendix G` with the new results mentioned above. We hope this new analysis helps address your concerns.\\n\\n*[1] He, Kaiming, et al. \\\"Momentum contrast for unsupervised visual representation learning.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.*\\n\\n*[2] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" International conference on machine learning. PMLR, 2021.*\\n\\n> **W3: Significance of conclusions: *'Some of the conclusions may be naive and simple. For example, \\\" it may be effective only for \\u2018easy\\u2019 open-set noise.\\\". Because entropy-based methods generally fail to detect close-set feature-dependent noise as well.'***\\n\\nMany thanks for the comment. **We fully understand the insights from closed-set noise to open-set noise\\u2014indeed, this was one of the baseline assumptions we considered prior to conducting our analysis and experiments. 
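For concreteness, the three-step indicator from Part 2/3 above can be sketched in a few lines of NumPy. This is a minimal illustration operating on pre-extracted embeddings; `openset_indicator` is a hypothetical helper name, and the CLIP image/text encoding step itself is omitted:

```python
import numpy as np

def openset_indicator(v_x, t_target, t_others):
    """Compute D_x = S_y - S_other from pre-extracted (CLIP-style) embeddings.

    v_x      : (d,)  visual embedding of the sample x
    t_target : (d,)  text embedding of the annotated-label prompt
    t_others : (k,d) text embeddings of the candidate source-class prompts L_y
    """
    def normalize(a):
        return a / np.linalg.norm(a, axis=-1, keepdims=True)

    v = normalize(v_x)
    s_y = float(v @ normalize(t_target))              # similarity to annotated label
    s_other = float((normalize(t_others) @ v).max())  # best match among L_y
    return s_y - s_other  # low / negative values flag likely open-set noise
```

Thresholding the indicator (e.g., flagging the lowest-scoring samples) would then separate suspected open-set noise, mirroring the distribution plot in Figure R3.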
Nevertheless, we believe that rigorous theoretical analysis and experimental validation are necessary.** Through these efforts, we not only confirmed the validity of the assumption but also revealed the limitations of existing methods in addressing challenging 'hard' open-set noise.\\n\\nTo summarize, we hope our rigorous analysis and clear conclusions will inspire further research on open-set noise, moving beyond the traditional focus on closed-set noise and entropy-based methods. **Additionally, based on **Reviewer op1H**'s suggestion, we have included additional analysis in the updated appendix, including the performance check of robust loss functions in dealing with different open-set noise, to further clarify and support our findings.** We sincerely hope these supplements can further address your concerns.\\n\\n \\n> **Q1: Linear classifier for fitted case: *'What does this sentence mean: obtaining a model that perfectly fits the data distribution is often challenging; here, we consider training a single-layer linear classifier upon a frozen pretrained encoder. I think a single-layer linear classifier may even be worse to fit the data distribution.'***\\n\\nMany thanks for the question. We are happy to further clarify: by 'perfectly fits the data distribution', we mean the model has perfectly fit the sampled distribution, i.e., the *fitted case*. \\n\\nIt is known that there is always a trade-off between underfitting and overfitting with respect to model capacity. A single-layer linear layer has limited capacity, thus it is harder to memorize all the labels, i.e., overfitting to the training set. However, as you mentioned, a single-layer linear classifier may also struggle to fully fit the data distribution due to its limited capacity. Thus, we propose to start with a frozen pretrained encoder - which provides a well-learned representation feature space. 
This approach reduces the burden on the linear classifier and allows it to leverage the pretrained features for classification.\"}", "{\"comment\": \"I sincerely thank the authors for their detailed and thoughtful responses, especially for providing new experimental results and theoretical analyses to address my concerns. I understand the challenges of delivering these updates within the short rebuttal period, and I greatly appreciate their effort. Most of my concerns have now been satisfactorily addressed.\\n\\nOverall, while this paper leans more towards presenting an observation rather than introducing a novel method, I believe its findings offer valuable insights that could advance our understanding of open-set label noise in the learning with noisy labels (LNL) research domain. Therefore, I maintain my score, which leans towards acceptance.\"}", "{\"title\": \"General response and summary of updated manuscript\", \"comment\": \"We sincerely thank the reviewers and the area chair for their time, effort, and constructive feedback on our manuscript. We appreciate the reviewer's recognition of the insights our work brings to the LNL community, especially in the study of open-set noise. **We carefully reviewed and addressed all the comments and have made corresponding revisions to address the concerns raised. The key updates to our manuscript include**:\\n\\n- Minor Revisions in Main Paper:\\n 1. Fixed typos and grammatical errors.\\n 2. Condensed the contributions section.\\n 3. Updated the presentation in the experiments `Section 4.1` for improved clarity and consistency.\\n- Additional Analysis in the Appendix:\\n 1. Expanded experiments on the performance of robust loss functions under open-set noise in `Appendix F`.\\n 2. Added comparisons of 'hard' vs. 'easy' open-set noise under scenarios with additional same and fixed closed-set noise in `Appendix D.3`.\\n 3. 
Conducted preliminary explorations on identifying and learning effectively with 'hard' open-set noise in `Appendix G`.\\n\\nWe hope these updates address the reviewers' concerns and strengthen the overall quality of the paper. We are grateful for the opportunity to further refine our work and look forward to additional feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Enquiry for further insights\", \"comment\": \"Excellent work! It\\u2019s a really interesting and attractive idea to combine the OOD detection task with the original classification task. However, I\\u2019m wondering if the authors could provide further insights into the relationship between classification accuracy and OOD detection performance. For example, could optimizing one objective negatively impact the other?\"}", "{\"title\": \"Response to Reviewer op1H (Part 1/4)\", \"comment\": \"Thanks very much for the careful review. We sincerely appreciate your time and effort in reading our paper, as well as the insightful and positive feedback. To fully address your questions, we have conducted additional experiments and analyses. We kindly ask for your understanding regarding the slightly lengthy response and sincerely look forward to engaging in further discussions with you.\\n\\n> **W1: Results on more LNL methods: *\\u2018The paper examines a few existing learning with noisy labels (LNL) methods \\u2026 the state-of-the-art in handling open-set noise.'***\\n\\nMany thanks for the suggestion. We mainly experimented with sample selection methods before, as these methods often hold the state-of-the-art performances on current benchmarks. We are happy to include more results of methods based on robust loss functions. Specifically, we considered some widely used robust loss functions, including the Generalized Cross Entropy (GCE) loss function [1] and the Symmetric Cross Entropy (SCE) loss function [2]. 
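As a point of reference, the two losses can be sketched as follows; this is a minimal NumPy version of the formulations in [1] and [2], and the defaults (q=0.7, alpha=0.1, beta=1.0, A=-4) are the commonly used values from those papers, not necessarily the settings of the runs reported here:

```python
import numpy as np

def gce_loss(probs, y, q=0.7):
    """Generalized Cross Entropy [1]: L_q = (1 - p_y^q) / q (tends to CE as q -> 0)."""
    p_y = probs[np.arange(len(y)), y]
    return float(np.mean((1.0 - p_y ** q) / q))

def sce_loss(probs, y, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric Cross Entropy [2]: alpha * CE + beta * RCE, with log 0 clipped to A."""
    n, k = probs.shape
    p_y = probs[np.arange(n), y]
    ce = -np.mean(np.log(np.clip(p_y, 1e-7, 1.0)))
    one_hot = np.eye(k)[y]
    log_label = np.where(one_hot > 0, 0.0, A)  # log 1 = 0; log 0 := A
    rce = -np.mean(np.sum(probs * log_label, axis=1))
    return float(alpha * ce + beta * rce)
```

Both take a batch of softmax probability vectors `probs` of shape (n, k) and integer labels `y`, and both reduce to zero for perfect predictions.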
We report below the experimental results (Classification accuracy and OOD detection AUC score) on the CIFAR100-O and ImageNet-O datasets after replacing the standard cross-entropy loss with two different robust loss functions.\\n\\n| Noise mode | Easy | | | | Hard | | | |\\n|-----------------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| **Noise ratio** | **0.1** | **0.2** | **0.3** | **0.4** | **0.1** | **0.2** | **0.3** | **0.4** |\\n| CE | 0.846 | 0.804 | 0.770 | 0.714 | **0.872** | 0.847 | **0.842** | **0.829** |\\n| GCE [1] | **0.854** | 0.810 | 0.763 | 0.708 | 0.864 | 0.840 | 0.813 | 0.800 |\\n| SCE [2] | 0.846 | **0.822** | **0.787** | **0.729** | 0.871 | **0.854** | 0.840 | 0.814 |\\n\\n*Table 1. Classification accuracy with robust loss functions on CIFAR100-O dataset.*\\n\\n\\n| Noise mode | Easy | | | | Hard | | | |\\n|-----------------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| **Noise ratio** | **0.1** | **0.2** | **0.3** | **0.4** | **0.1** | **0.2** | **0.3** | **0.4** |\\n| CE | **0.804** | 0.793 | 0.773 | 0.754 | **0.770** | **0.728** | **0.692** | **0.664** |\\n| GCE [1] | 0.782 | 0.771 | 0.752 | 0.719 | 0.759 | 0.718 | 0.679 | 0.639 |\\n| SCE [2] | 0.794 | **0.799** | **0.784** | **0.756** | 0.749 | 0.718 | 0.682 | 0.651 |\\n\\n*Table 2. OOD detection AUC score with robust loss functions on CIFAR100-O dataset.* \\n\\n| Noise mode | Easy | | | | Hard | | | |\\n|-----------------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| **Noise ratio** | **0.1** | **0.2** | **0.3** | **0.4** | **0.1** | **0.2** | **0.3** | **0.4** |\\n| CE | 0.822 | 0.783 | 0.752 | **0.721** | **0.859** | 0.838 | **0.834** | 0.821 |\\n| GCE [1] | 0.813 | 0.788 | 0.739 | 0.714 | 0.853 | 0.833 | 0.818 | **0.834** |\\n| SCE [2] | **0.826** | **0.797** | **0.759** | 0.720 | 0.841 | **0.839** | 0.831 | 0.827 |\\n\\n*Table 3. 
Classification accuracy with robust loss functions on ImageNet-O dataset.*\\n\\n\\n| Noise mode | Easy | | | | Hard | | | |\\n|-----------------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| **Noise ratio** | **0.1** | **0.2** | **0.3** | **0.4** | **0.1** | **0.2** | **0.3** | **0.4** |\\n| CE | **0.769** | 0.760 | 0.764 | 0.739 | **0.658** | **0.601** | **0.569** | **0.549** |\\n| GCE [1] | 0.732 | 0.740 | 0.729 | 0.719 | 0.636 | 0.591 | 0.555 | 0.513 |\\n| SCE [2] | 0.749 | **0.768** | **0.765** | **0.748** | 0.633 | 0.599 | 0.558 | 0.537 |\\n\\n*Table 4. OOD detection AUC score with robust loss functions on ImageNet-O dataset.* \\n\\n\\nWe highlight the methods that achieve the best performance under different settings in bold. Overall, we observe the following:\\n- Compared to the original CE loss, the GCE loss function generally results in lower classification accuracy and OOD detection AUC scores.\\n- The SCE loss function appears to improve the classification and OOD detection performance in the presence of 'Easy' open-set noise. However, it seems to degrade performance when dealing with 'Hard' open-set noise.\"}", "{\"title\": \"Response to Reviewer op1H (Part 3/4)\", \"comment\": \"> **W2: Comparison between different open-set noise with same pattern and noise ratio of close-set noise: *\\u2018For Section 3.3.2, \\u2026 how different open-set noise with same close-set label noise.\\u2019***\\n\\nMany thanks for the insightful question. Previously, we analyzed open-set noise by setting the closed-set noise to zero to simplify the analysis. We are pleased to confirm that our analysis can be extended to conditions with additional *'same pattern and noise ratio of closed-set noise'*. Specifically, compared to the analysis in our paper, we no longer assume $T_{in} \\\\neq \\\\mathbf{I}$. 
Note that, similar to the main paper, we consider two proxy sample points $x_1$ and $x_2$ corresponding to two different open-set noise modes. We also assume $[p^1_1, \\\\dots,p^1_A] = [p^2_1, \\\\dots,p^2_A]$ to focus solely on the impact of the noise modes. \\n\\n**1. Fitted case**: \\n\\nWe first investigate the fitted case. Following the *error rate inflation* induced for the fitted case (L260-L262) in the main paper, we have:\\n- Easy open-set noise - $x_1$:\\n$\\\\Delta E_{x_1} = \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A+B}p^1_i T^1_{i1}, \\\\dots , \\\\sum_{i=1}^{A+B}p^1_i T^1_{iA}]} = \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A} p^1_iT_{i1}+\\\\frac{1}{A}\\\\sum_{i=A+1}^{A+B} p^1_i, \\\\dots, \\\\sum_{i=1}^{A} p^1_iT_{iA}+\\\\frac{1}{A}\\\\sum_{i=A+1}^{A+B} p^1_i]} = \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A} p^1_iT_{i1}, \\\\dots, \\\\sum_{i=1}^{A} p^1_iT_{iA}]}$ \\n \\n- Hard open-set noise - $x_2$:\\n$\\\\Delta E_{x_2} =\\\\max[p^2_1, ...,p^2_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A+B}p^2_i T^2_{i1}, ..., \\\\sum_{i=1}^{A+B}p^2_i T^2_{iA}]} =\\\\max[p^2_1, ...,p^2_A] - p_{\\\\arg\\\\max[\\\\sum_{i=1}^{A} p^2_iT_{i1}+\\\\sum_{b\\\\in H_1}p^2_b,...,\\\\sum_{i=1}^{A} p^2_iT_{iA}+\\\\sum_{b\\\\in H_A}p^2_b]}$.\\n\\n Unfortunately, without extra assumptions on $T_{in}$ or $[p^1_1, \\\\dots,p^1_A]$, it is impossible to compare $\\\\Delta E_{x_1}$ and $\\\\Delta E_{x_2}$. Here, we consider two conservative but realistic cases:\\n\\n**i. Concentration assumption of $[p^1_1, \\\\dots,p^1_A]$**: in this case, we assume the probability $[p^1_1, \\\\dots,p^1_A]$ concentrates on one specific class, say, $t$. We thus have $p^1_t \\\\rightarrow 1, p^1_k \\\\rightarrow 0, \\\\forall k \\\\neq t$.
In this case, we have:\\n- Easy open-set noise - $x_1$:\\n$\\\\Delta E_{x_1} = \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A} p^1_iT_{i1}, \\\\dots, \\\\sum_{i=1}^{A} p^1_iT_{iA}]} \\\\\\\\\\n\\\\approx p^1_t - p_{\\\\arg\\\\max [p^1_tT_{t1}, \\\\dots, p^1_tT_{tt}, \\\\dots,p^1_tT_{tA}]}\\\\\\\\\\n \\\\xrightarrow{\\\\text{diagonal-dominant noise transition matrix}} \\\\\\\\\\n=0.$ \\n\\nNote that we normally implicitly assume a *diagonal-dominant noise transition matrix*, that is, $\\\\forall i, j\\\\neq i, T_{ii} > T_{ij}$. \\n\\n- Hard open-set noise - $x_2$:\\n$\\\\Delta E_{x_2} =\\\\max[p^2_1, ...,p^2_A] - p_{\\\\arg\\\\max[\\\\sum_{i=1}^{A} p^2_iT_{i1}+\\\\sum_{b\\\\in H_1}p^2_b,...,\\\\sum_{i=1}^{A} p^2_iT_{iA}+\\\\sum_{b\\\\in H_A}p^2_b]}\\\\\\\\\\n\\\\approx p^2_t - p_{\\\\arg\\\\max [p^2_tT_{t1}+\\\\sum_{b\\\\in H_1}p^2_b, \\\\dots, p^2_tT_{tt}+\\\\sum_{b\\\\in H_t}p^2_b, \\\\dots,p^2_tT_{tA}+\\\\sum_{b\\\\in H_A}p^2_b]}\\\\\\\\\\n\\\\geq 0.$\\n\\n\\n\\n**ii.
Symmetric closed-set noise for $T_{in}$**: in this case, we assume a symmetric noise transition matrix $T$.\\n\\n- Easy open-set noise with $x_1$:\\n$\\\\Delta E_{x_1} = \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A} p^1_iT_{i1}, \\\\dots, \\\\sum_{i=1}^{A} p^1_iT_{iA}]} \\\\\\\\\\n= \\\\max[p^1_1, \\\\dots,p^1_A] - p_{\\\\arg\\\\max [\\\\sigma+ p^1_1T_{\\\\Delta}, \\\\dots, \\\\sigma+ p^1_AT_{\\\\Delta}]}\\\\\\\\\\n=0.$ \\n \\n- Hard open-set noise with $x_2$:\\n$\\\\Delta E_{x_2} =\\\\max[p^2_1, ...,p^2_A] - p_{\\\\arg\\\\max [\\\\sum_{i=1}^{A+B}p^2_i T^2_{i1}, ..., \\\\sum_{i=1}^{A+B}p^2_i T^2_{iA}]} \\\\\\\\\\n=\\\\max[p^2_1, ...,p^2_A] - p_{\\\\arg\\\\max[\\\\sum_{i=1}^{A} p^2_iT_{i1}+\\\\sum_{b\\\\in H_1}p^2_b,...,\\\\sum_{i=1}^{A} p^2_iT_{iA}+\\\\sum_{b\\\\in H_A}p^2_b]}\\\\\\\\\\n= p^2_t - p_{\\\\arg\\\\max [\\\\sigma+ p^2_1T_{\\\\Delta}+\\\\sum_{b\\\\in H_1}p^2_b,\\\\dots,\\\\sigma+ p^2_AT_{\\\\Delta}+\\\\sum_{b\\\\in H_A}p^2_b]}\\\\\\\\\\n\\\\geq 0.$\\n\\nIn above two cases, we have $\\\\Delta E_{x_1} = 0$; thus we have $\\\\Delta E_{x_1} \\\\leq \\\\Delta E_{x_2}$. **That is to say, under either of the two popular assumptions above, we arrive at the same conclusion: 'easy' open-set noise is less harmful than 'hard' open-set noise.**\"}", "{\"title\": \"Response to Reviewer op1H (Part 2/4)\", \"comment\": \"Nevertheless, we want to emphasize that the performance differences between the two robust loss functions and the original cross-entropy loss in the above results are **not significant**. Furthermore, these robust loss functions were **not originally designed to account for open-set noise**. Therefore, we believe further analysis is needed to evaluate the performance of different robust loss functions under open-set noise, and we are very interested in exploring this in future work.\\n\\nThat said, we would like to offer some preliminary insights. 
We want to point out that these robust loss functions generally only affect the convergence speed but do not alter the fully converged extrema. For instance, in the case of the Symmetric Cross-Entropy (SCE) loss, we have:\\n\\n$\\n\\\\mathcal{L}\\\\_{\\\\text{SCE}} = \\\\alpha \\\\cdot \\\\mathcal{L}\\\\_{\\\\text{CE}} + \\\\beta \\\\cdot \\\\mathcal{L}\\\\_{\\\\text{RCE}}\\n$\", \"where\": \"- $\\\\mathcal{L}_{\\\\text{CE}} = -\\\\sum\\\\_{i=1}^C y_i \\\\log p_i$,\\n- $\\\\mathcal{L}_{\\\\text{RCE}} = -\\\\sum\\\\_{i=1}^C p_i \\\\log y_i$,\\n- $\\\\alpha$ and $\\\\beta$ are weighting coefficients for the two terms.\\n\\n$\\n\\\\frac{\\\\partial \\\\mathcal{L}\\\\_{\\\\text{SCE}}}{\\\\partial z_i} = \\\\alpha \\\\cdot \\\\frac{\\\\partial \\\\mathcal{L}\\\\_{\\\\text{CE}}}{\\\\partial z_i} + \\\\beta \\\\cdot \\\\frac{\\\\partial \\\\mathcal{L}\\\\_{\\\\text{RCE}}}{\\\\partial z_i}\\n$\", \"breaking_it_down\": \"1. Gradient of CE Term:\\n$\\\\frac{\\\\partial \\\\mathcal{L}_{\\\\text{CE}}}{\\\\partial z_i} = p_i - y_i$\\n\\n2. Gradient of RCE Term:\\n$\\n\\\\frac{\\\\partial \\\\mathcal{L}_{\\\\text{RCE}}}{\\\\partial z_i} = \\\\frac{y_i}{p_i} \\\\cdot (1 - p_i)\\n$\\n\\n3. 
Gradient of SCE Loss:\\n$\\n\\\\frac{\\\\partial \\\\mathcal{L}_{\\\\text{SCE}}}{\\\\partial z_i} = \\\\alpha \\\\cdot (p_i - y_i) + \\\\beta \\\\cdot \\\\frac{y_i}{p_i} \\\\cdot (1 - p_i)\\n$\\n\\nFor the true class ($i = y$):\\n$\\n\\\\frac{\\\\partial \\\\mathcal{L}_{\\\\text{SCE}}}{\\\\partial z_y} = \\\\alpha \\\\cdot (p_y - 1) + \\\\beta \\\\cdot \\\\frac{1}{p_y} \\\\cdot (1 - p_y)\\n$\\n\\nFor all other classes ($i \\\\neq y$):\\n$\\n\\\\frac{\\\\partial \\\\mathcal{L}_{\\\\text{SCE}}}{\\\\partial z_i} = \\\\alpha \\\\cdot p_i + \\\\beta \\\\cdot \\\\frac{0}{p_i} \\\\cdot (1 - p_i) = \\\\alpha \\\\cdot p_i\\n$\\n\\nWe notice that, for both CE loss and SCE loss, their gradients reduce to 0 if and only if $p_i = y_i, \\\\forall i$, which corresponds to the *overfitted case* analyzed in our paper. This implies that with sufficient model capacity and training (as is often the case with modern deep neural networks), the conclusions of our analysis remain valid even when robust loss functions are used.\\n\\n\\n*We have also updated the manuscript accordingly with above new results in `Appendix F. Robust loss functions meet open-set noise`. If the reviewer has any other recommended baseline methods, we would be more than happy to further discuss and include them in the final version.*\\n\\n\\n*[1] Zhang, Zhilu, and Mert Sabuncu. \\\"Generalized cross entropy loss for training deep neural networks with noisy labels.\\\" Advances in neural information processing systems 31 (2018).*\\n\\n*[2] Wang, Yisen, et al. \\\"Symmetric cross entropy for robust learning with noisy labels.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2019.*\"}", "{\"title\": \"Response to Reviewer rg5p (Part 3/3)\", \"comment\": \"> **Minor issues on presentation.**\\n\\nMany thanks for your careful reading. We would like to express our deep gratitude for your suggestions on the presentation. 
**We have proofread the paper again and updated the manuscript accordingly based on your suggestions (we kindly refer you to the updated paper).** Specifically, the contributions are condensed into three points as below (Following your suggestion, we removed the dataset contribution and incorporated additional analysis and experiments):\\n- *`We introduce the concept of a complete noise transition matrix, reformulate the Learning with Noisy Labels (LNL) problem to account for open-set noise, and analyze two offline cases: fitted and overfitted.`*\\n- *`We demonstrate that open-set noise generally has less negative impact on classification accuracy than closed-set noise, analyze 'hard' vs. 'easy' open-set noise, propose an out-of-distribution (OOD) detection task for further evaluation, and find entropy-based open-set noise detection effective only for 'easy' open-set noise.`*\\n- *`We conduct preliminary explorations with vision-language models and self-supervised models on identifying and learning with 'hard' open-set noise, expand experiments on the performance of robust loss functions under open-set noise, and analyze their effectiveness in challenging noise scenarios.`*\\n\\n\\n**Thank you once again for your time and valuable feedback, which have helped us improve our work. If there are any additional questions or clarifications needed, please do not hesitate to let us know. We would also kindly ask if you might consider revising your score, should our responses satisfactorily resolve the points you raised. We truly appreciate your consideration and look forward to your further comments.**\"}", "{\"comment\": \"Dear Reviewer rg5p,\\n\\nWe sincerely apologize for reaching out to you again, and we hope this does not disrupt your Thanksgiving holiday if you celebrate it. **As the discussion phase will conclude in two days, we wonder if you have any additional suggestions or concerns. We would be more than happy to engage in further discussions. 
If our rebuttal has helped to resolve your concerns, we sincerely hope you might consider adjusting the rating\\u2014your support is crucial for this borderline submission.**\\n\\nThank you once again for your time and support. We greatly appreciate your detailed review and valuable feedback on our submission. Your insights have been instrumental in helping us refine and improve the quality of our work. We are truly grateful for your assistance and look forward to your reply.\\n\\nBest regards,\\n\\nAuthors of Submission 4732\"}", "{\"title\": \"Response to Reviewer FqR8\", \"comment\": [\"We regret that the feedback provided was brief and lacked depth. Based on the comments, it appears that the reviewer may not have fully engaged with our paper. In addition to the theoretical analysis and the newly proposed benchmark datasets, **our work also includes extensive experimental validations of the theoretical findings**, specifically addressing the following points:\", \"An empirical comparison between open-set noise and closed-set noise in both the fitted and overfitted cases.\", \"An evaluation of different open-set noise patterns under the same total noise ratio, in both the fitted and overfitted cases.\", \"An analysis of the sensitivity of a representative entropy-based open-set detection method across various open-set noise patterns.\", \"Motivated by constructive feedback from **Reviewer op1H** and **Reviewer rg5p**, we have conducted further analyses, including:\", \"The performance of robust loss functions under open-set noise.\", \"Several potential solutions to address the newly proposed open-set noise scenarios.\", \"Furthermore, we respectfully disagree with the assertion that theoretical analysis and dataset construction are insufficient for publication at ICLR. 
As in many other areas of research, **rigorous theoretical exploration and the introduction of novel datasets can provide valuable insights and advance the understanding within the machine learning community**. We believe our contributions meet these standards and hope our additional analysis further strengthens our case.\"]}" ] }
Eyv12jjyMN
Attacking for Inspection and Instruction: Attack Techniques Can Aid In Interpretability
[ "Wei Liu", "Zhongyu Niu", "Lang Gao", "Zhiying Deng", "Jun Wang", "Haozhao Wang", "Zhigang Zeng", "Ruixuan Li" ]
This study investigates a self-explanatory natural language processing framework constructed with a cooperative game, where a generator first extracts the most informative segment from raw input, and a subsequent predictor utilizes the selected subset for its input. The generator and predictor are trained collaboratively to maximize prediction accuracy. In this paper, we first uncover a potential caveat: such a cooperative game could unintentionally introduce a sampling bias between the explanation and the target prediction label. Specifically, the generator might inadvertently create an incorrect correlation between the selected explanation and the label, even when they are semantically unrelated in the original dataset. Subsequently, we elucidate the origins of this bias using both detailed theoretical analysis and empirical evidence. Our findings suggest a direction for inspecting these correlations through attacks, based on which we further introduce an instruction to prevent the predictor from learning the correlations. Through experiments on six text classification datasets and one graph classification dataset using three network architectures (GRUs, BERT, and GCN), we show that our attack-inspired method outperforms recent competitive methods. We also compare our method against a representative LLM (llama-3.1-8b-instruct), and demonstrate that our approach achieves comparable results, sometimes even surpassing it.
[ "Interpretability", "natural language processing", "feature selection" ]
https://openreview.net/pdf?id=Eyv12jjyMN
https://openreview.net/forum?id=Eyv12jjyMN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0m9oOklm7", "oeb4KutygI", "mzlt9i89Ev", "FipcNWelJR", "DKe5kvrZFb" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1731329329213, 1730294075804, 1730533210057, 1733541019208, 1730416184912 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6022/Reviewer_EoKe" ], [ "ICLR.cc/2025/Conference/Submission6022/Reviewer_SpHe" ], [ "ICLR.cc/2025/Conference/Submission6022/Reviewer_N2Uz" ], [ "ICLR.cc/2025/Conference/Submission6022/Authors" ], [ "ICLR.cc/2025/Conference/Submission6022/Reviewer_WyTS" ] ], "structured_content_str": [ "{\"summary\": \"This submission examines a cooperative game framework for interpretability, where a generator identifies important parts of the input (rationale) by maximizing the mutual information with the labels, which the predictor then uses to make predictions. The authors argue that even if the predictive model achieves high accuracy, the generated rationales may be uninformative due to spurious correlations introduced during sampling. To address this, the authors propose an attacker framework that assesses whether the predictor is learning from spurious correlations and instructs the predictor to avoid such degeneracy.\\n\\n---\\n**Disclaimer:** The topic of this paper is totally outside my area of expertise. I requested that the PCs remove me from this submission at the start of the review period but have not received a response. My evaluation below is, therefore, based only on an educated guess.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"To my understanding, the proposed mechanism is both novel and interesting: spurious correlation is not an inherent property of the underlying dataset but rather an artifact of the sampling process conditioned on the generator.\", \"weaknesses\": \"I have the following questions and concerns:\\n\\n1. I am unclear about the experimental setting in Figure 4. 
If the random patterns are independent of $Y$, why does the validation accuracy (orange) exceed 50%? If these random patterns do contain information relevant to the predictive task, why is the model trained on the full text unable to leverage this information effectively?\\n\\n2. Section 4.2 could benefit from improved clarity. It is not immediately evident why Equation (6) suggests that the attacker \\\"inspects trivial patterns from X.\\\" The intuition in Section 4.3 appears to be that the trivial features are those from which the classifier can also predict the opposite label. If this is the case, how does this approach provide an advantage over simply discarding features with low mutual information with the labels?\\n\\n3. Within the current framework, how does the attacker differentiate patterns that might yield contrasting sentiments depending on the context? For instance, in the example in Figure 9, words like \\\"bitter\\\" could imply positive or negative sentiment based on surrounding context; would such patterns therefore be discarded?\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new component into the framework of Rationalizing Neural Predictions (Lei et al., 2016), which facilitates interpretable (text) classification. RNPs consist of a generator that selects a part of an input and a predictor that uses this part to classify a label. The first contribution is highlighting that a generator can introduce spurious correlations into the predictor. Second, an attacker network that identifies potential spurious correlations in data and then instructs the predictor not to learn from these trivial patterns is introduced. 
A well-instructed predictor can give good feedback to the generator\\u2019s selection, which improves the model's robustness / generalization / rationale capabilities.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The main strength of the paper lies in its **first contribution** explained across Sections 1 & 4.1 (+Appendix). Highlighting the new type of spurious correlation stemming from conditioning on the generator is interesting and significant. Knowledge of quite simple statistics facts on conditional independence leads to correcting a rather empirical line of research on interpretable NLP.\\n2. I like all the figures and tables in the paper.\", \"weaknesses\": [\"1. **Idea:** I am not convinced that the **second contribution**, i.e. a practical solution using an attacker, is useful. This paper might try to solve a problem that never even existed in a similar parallel line of work on interpretable (text) classification. What is the, unmentioned in the paper, relation of RNPs to Prototype-based Networks? 
E.g.\", \"Interpretable and steerable sequence learning via prototypes, KDD 2019\", \"Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?, ACL 2020\", \"ProtoTEx: Explaining model decisions with prototype tensors, ACL Main 2022\", \"Proto-lm: A prototypical network-based framework for built-in interpretability in large language models, EMNLP Findings 2023\", \"ProtoryNet - Interpretable text classification via prototype trajectories, JMLR 2023\", \"Robust text classification: Analyzing prototype-based networks, EMNLP Findings 2024\", \"**Sidenote:** I peeked inside the related papers on \\\"Inter RAT (Yue et al., 2023), CR (Zhang et al., 2023), FR (Liu et al., 2022) and NIR (Storek et al., 2023).\\\" None of them ever mention prototype-based networks (related to \\\"This looks like that ...\\\" NeurIPS 2019) or concept bottleneck models (related to \\\"Concept bottleneck models\\\" ICML 2020), which seems like a systemic weakness of work on RNPs.\", \"2. **Experiments:**\", \"A) Contain no comparisons to popular methods in text classification, e.g. fastText (Bag of tricks for efficient text classification, EACL 2017), or even baseline BERT (without RNP) etc. It is valuable to compare with uninterpretable models to observe the baseline / tradeoff for a broader context on the progress in the field.\", \"B) There is no analysis of the method's efficiency. One can expect to experience the computational drawbacks of training the attacker. How much overhead is added by adding an attacker? How does the convergence / learning speed / sample efficiency compare to the related baselines?\", \"3. **Communication:** I believe the paper's title is *very* uninformative considering the paper's contents, to the point that I want to raise this as an issue. First of all, from its beginning, the paper assumes a particular modeling framework called *Rationalizing* Neural Predictions, which is primarily used in *text* / *language* modeling.
Both concepts are not clearly represented in the title or abstract, where the two first sentences read more like the paper itself introduces an idea of RNPs. Moreover, the title misses a critical concept discussed throughout the paper, i.e. *spurious correlation*. Emphasizing \\\"attack\\\" two times may be misleading for those who actually work on adversarial ML. \\\"Interpretability\\\" seems vague as both *robustness* (generalization) and *causality* are even more emphasized in the paper. The word \\\"techniques\\\" is uninformative (which techniques? one or many attacks?) and is not even used later in the paper.\"], \"questions\": [\"1. I can suggest emphasizing *discrimination* (cf. GANs) instead of \\\"attacking\\\" across the paper for clarity since no attacking entity is considered in this work. \\\"An attacker\\\" is confusing in the context of actually improving interpretability / robustness.\", \"2. Figure 4: Using the accuracy metric may be misleading as there is no information regarding the class (im)balance. Please either plot a horizontal line showing the class ratio, comment on it in the figure's caption, or plot F1 etc.\", \"3. Assumption 1 becomes unrealistic in scenarios with multiple classes. Binary sentiment classification is an oversimplified example.\", \"4. Overall, I like Section 4.2. But, I disagree with the rationale given in L262\\u2013270 regarding the result in Figure 4:\", \"> \\\"Does this strange result stem from the fact that the 10% randomly selected patterns already contain enough sentiment inclination for classification? The answer is no. [...] 
We observe that the green line indicates a significantly lower accuracy (about 58%), implying that the randomly selected patterns contain only minimal sentiment information.\\\"\", \"In my opinion, the answer is we don't know:\", \"A) It is probable that the predictor trained using the full texts (green line) itself learns spurious correlations (shortcuts) that are different from these contained in the 10% randomly selected patterns.\", \"B) It is probable that the predictor learns variable interactions, e.g. one important word lies inside the 10%, and another one lies inside the 90%; access to both is required for accurate prediction (rationale).\", \"Thus, the implication seems incorrect.\", \"### Other feedback\", \"L36: introduce the abbreviation for \\\"XAI\\\"\", \"L83\\u201385: \\\"This phenomenon then leads to a trust concern: whether the extracted rationale is really responsible for the label in the original dataset. This problem is important because explanations should also be aligned with their social attribution (Jacovi & Goldberg, 2020; 2021).\\\" I can disagree; explanations don't have to be aligned with their social attribution but rather be faithful to the model. Confirmation bias is a real threat to progress in research on interpretability.\", \"RW: Authors might be interested in a very related work: \\\"Post hoc explanations may be ineffective for detecting unknown spurious correlation\\\" ICLR 2022\", \"L92: What is denoted by the letter \\\"g\\\"; generator? It was never introduced.\", \"L127: typo, missing reference\", \"L193: typo, methods constrain\", \"L232: please rephrase \\\"it will *sometimes* results in *some* problems.\\\"\", \"L242/Fig.3: \\\"a local *of* the causal graph\\\" sounds odd\", \"Eq. 
8: missing spaces next to \\\" & \\\"\", \"L281: use another letter instead of \\\"n\\\", which was used before to denote the number of variables (T_1, ..., T_n)\", \"Figure 5: wrong wording in \\\"Attack *to* Inspection and Instruction\\\", do you mean \\\"for\\\" or \\\"as\\\"?\", \"L310: \\\"Inspection\\\" seems to be a new concept introduced here, but the paper's Introduction gives no intuition of what it really means \\\"to inspect\\\". Also, \\\"the trivial patterns learned by the predictor can be inspected through attack\\\" sounds like defining \\\"inspection\\\" by using the word \\\"inspect\\\". I like the sentence in L331 explaining that \\\"an attacker can identify uninformative trivial patterns and classify them into the opposite class.\\\" and can recommend moving this explanation to the beginning of Sec. 4.3 or even Introduction.\", \"L342: wrong wording in \\\"The situation of a text X contains\\\", do you mean \\\"if\\\", \\\"when\\\", or \\\"containing\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies the problem of spurious correlations introduced in self-explaining rationalizing framework. The generator that selects the relevant text for interpretability could rely on trivial and spurious text patterns when used in conjunction with a text classification model (predictor). The paper proposes an attacker to first learn a label-agnostic text selection model, which is then used to attack the predictor to check for adversarial robustness. 
The attacker is then used in adversarial training to minimize the reliance on the spurious text patterns and thus improves the generator/predictor accuracy as measured on 2 text classification tasks and 1 graph classification task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper studies a problem of relying on spurious text correlations in interpretability methods and demonstrates that it is an issue in existing rationalizing methods\", \"Experiments on a diverse set of tasks demonstrate that there is headroom in improving accuracy through an interpretable method\"], \"weaknesses\": \"* The method is comparable to adversarial training of the classifier [1, 2, 3] where prior research has shown that adversarial training improves the robustness of the classifier. In that regard, the distinction between this work and related work in adversarial training is missing.\\n* The theoretical exposition is unclear - with R defined in the causal diagram but unused in the equation. The training paradigm and the intended causal diagram that is being enforced is unclear.\\n* Figure 2b shows that predictor trained on random selections show higher accuracy (orange) than when the predictor was trained on the full text but given random selections of text (green). Further the gap between blue and orange is only 10-15%, which indicates that the test setup is not robust enough to begin with. 
This has to be explained, further - in Table 1 & 2, the accuracy where any rationalization is not used, but the full text is used for classification should be provided\\n\\n[1] https://aclanthology.org/P18-1099/\\n[2] https://dl.acm.org/doi/10.1145/3278721.3278779 \\n[3] https://aclanthology.org/P19-1561/\", \"questions\": \"Discussion on the results with respect to the full-text baseline and related work in adversarial training would help increase the score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This study investigated a self-explaining framework constructed with a cooperative game, which includes a generator and predictor for rationale discovery and tasks prediction. The authors proposed to inspect spurious correlations through attacks, and provide guidelines to stop the predictor from learning these correlations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This study introduced a two-step method for detecting spurious correlations: inspection and instruction. The authors first developed an attacker model to identify the trivial patterns that result in spurious rationale. Subsequently, they proposed a strategy to inhibit the predictor from acquiring such correlations.\", \"weaknesses\": \"The authors claimed that their method distinguishes from existing causality research for spurious correlation. Instead of focusing on spurious correlations inherent in the data, they are looking for spurious correlations from the selection process of the generator. However, this claim appears ambiguous and requires further explanation. Typically, if a correlation is spurious and does not derive from the data itself, it is often due to model bias and hence, is model-specific. 
Yet, this paper also posits that their proposed method is model agnostic. The authors should elucidate these claims in their paper.\\n\\nConsidering the aforementioned concern, the contribution of this paper appears to be not significant. Furthermore, the approach of using attack techniques to identify spurious correlations has been previously explored in the literature, such as in \\\"Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation\\\" (https://arxiv.org/pdf/2305.11596).\\n\\nThe impact of this work is small. It is not an end-to-end tool, but instead an add-on step for existing tools.\", \"questions\": \"There are several grammar and format errors such as:\", \"line_128\": \"?\", \"line_234\": \"\\\"Note that independent doesn\\u2019t lead to conditional independent:\\\"\", \"figure_3\": \"\\\"A local of the causal graph ...\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EytBpUGB1Z
Retrieval Head Mechanistically Explains Long-Context Factuality
[ "Wenhao Wu", "Yizhong Wang", "Guangxuan Xiao", "Hao Peng", "Yao Fu" ]
Despite the recent progress in long-context language models, it remains elusive how transformer-based models exhibit the capability to retrieve relevant information from arbitrary locations within the long context. This paper aims to address this question. Our systematic investigation across a wide spectrum of models reveals that a special type of attention heads are largely responsible for retrieving information, which we dub retrieval heads. We identify intriguing properties of retrieval heads: (1) universal: all the explored models with long-context capability have a set of retrieval heads; (2) sparse: only a small portion (less than 5\%) of the attention heads are retrieval heads. (3) intrinsic: retrieval heads already exist in models pretrained with short context. When extending the context length by continual pretraining, it is still the same set of heads that perform information retrieval. (4) dynamically activated: take Llama-2 7B for example, 12 retrieval heads always attend to the required information no matter how the context is changed. The rest of the retrieval heads are activated in different contexts. (5) causal: completely pruning retrieval heads leads to failure in retrieving relevant information and results in hallucination, while pruning random non-retrieval heads does not affect the model's retrieval ability. We further show that retrieval heads strongly influence chain-of-thought (CoT) reasoning, where the model needs to frequently refer back to the question and previously-generated context. Conversely, tasks where the model directly generates the answer using its intrinsic knowledge are less impacted by masking out retrieval heads. These observations collectively explain which internal part of the model seeks information from the input tokens. We believe our insights will foster future research on reducing hallucination, improving reasoning, and compressing the KV cache.
[ "Large language models", "long context", "interpretability", "attention" ]
Accept (Oral)
https://openreview.net/pdf?id=EytBpUGB1Z
https://openreview.net/forum?id=EytBpUGB1Z
ICLR.cc/2025/Conference
2025
{ "note_id": [ "r93bv6e3Jw", "oslLAtYLFh", "mWdKPkQwfZ", "f4INwbTN7H", "Z2rLr9sB4h", "XRdXzMHojZ", "JvwEnj51NN", "DNCrZ4uhx1", "6xK9xmlO3B", "3z2EneKbdE", "3hdlgLpcCY", "3J2ye6aNf0", "20HPzK14Tg" ], "note_type": [ "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730791453781, 1732098477210, 1730700700152, 1734383744399, 1732511666340, 1732561072482, 1731995416231, 1730755211211, 1737523520490, 1730605048182, 1731997177736, 1732096134038, 1732550566447 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2659/Reviewer_Br4p" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ], [ "ICLR.cc/2025/Conference/Submission2659/Reviewer_F46b" ], [ "ICLR.cc/2025/Conference/Submission2659/Area_Chair_Synt" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ], [ "ICLR.cc/2025/Conference/Submission2659/Reviewer_xfgX" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ], [ "ICLR.cc/2025/Conference/Submission2659/Reviewer_ZPPp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2659/Reviewer_xfgX" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ], [ "ICLR.cc/2025/Conference/Submission2659/Authors" ] ], "structured_content_str": [ "{\"summary\": [\"This paper investigates the mechanism with which transformer-based language models \\\"retrieve\\\" information in the long context. It experimented with four model families, six model scales, and three types of post-training variants, and reveals that a special type of attention heads are largely responsible for retrieving information (either copy-paste or paraphrase) from long contexts. Such attention heads are named \\u201cretrieval heads\\u201d. 
The authors find that these retrieval heads\", \"(1) exist in all the explored models,\", \"(2) are only 5% of the attention heads,\", \"(3) exist in models large-scale-pretrained with long or only short contexts and remain the same when the models are continually pretrained on longer contexts,\", \"(4) are dynamically activated given different contexts, and\", \"(5) will cause degradation in retrieval abilities or chain-of-thought abilities if pruned.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written, with a clear overarching research question (\\u201dHow do transformer-based language models acquire long-context capabilities?\\u201d), the substantial findings of the existence of \\u201cretrieval heads\\u201d and their properties, and experiments to support each finding.\", \"The experiments are extensively conducted on LLaMA, Yi, Qwen, and Mistral model families, at various scales from 6B to 8x7B, on base and chat models, and on dense and Mixture-of-Experts models.\", \"To identify which attention head is contributing to the retrieval from contexts, the authors proposed a novel retrieval score to measure the frequency of a head\\u2019s copy-paste behavior during autoregressive decoding. This retrieval score is both analyzed in different models and used to guide the experiments that prune or mask retrieval heads to understand the causal importance of retrieval heads.\", \"The authors also considerately report empirical results that these identified retrieval heads are activated during paraphrasing, QA, CoT reasoning tasks, and not just in copy-paste tasks.\"], \"weaknesses\": [\"It\\u2019d facilitate reading to clarify that \\\"$k$ is a sentence that is irrelevant to $x$\\\" in L146, instead of, for example, a short phrase or a single word. Can add a reference to Figure 2 so that readers see an example.\", \"The paper misses dataset details (L195, L355, L427). 
Are NIAH samples created manually or by prompting large language models? What datasets are used to begin with in Sec 2-4? What additional evaluation tests did you create in Sec 4.1-4.3?\", \"The paper misses experimental details, such as prompts used, links to existing assets used, etc.\", \"The questions below need to be addressed.\"], \"questions\": [\"L156: By $\\\\mathcal{R}$, do you mean real numbers? If so, perhaps use $\\\\mathbb{R}$ instead and clarify that $a$ refers to unnormalized attention scores.\", \"Figure 3: Seems that in fact less than 1% of attention heads are activated more than 50% of the time. The 5% in the caption could probably be changed to 1%.\", \"L194: Does it happen that the model generates a word $w$ that is not the next token in the needle $k$ that should be generated? If this happens, do you skip the example? Or consider that as a case when all attention heads do not perform a copy-paste operation, even if an attention head actually pays the most attention to the token that should be generated next?\", \"L203: What do you mean by \\u201cstabilize\\u201d and \\u201cconverge\\u201d? Please either provide definitions or plots to illustrate.\", \"Figure 7: Could be nice to include the dataset name in the caption.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer F46b\", \"comment\": \"We thank the reviewer for the detailed comments and the support! Regarding questions:\\n\\n>1. Are retrieval heads consistent across different architectures? Does the role of retrieval heads vary with different transformer architectures (e.g., decoder-only vs. encoder-decoder models), or are these properties universally applicable?\\n\\nWe tend to believe that retrieval heads are consistent across architectures, and the current experimental results on a variety of transformer variants support this hypothesis. 
Specifically, we have conducted experiments with:\\n\\n- Grouped-query attention (e.g., Qwen1.5),\\n- Mixture of Experts models (e.g., Mixtral), and\\n- Hybrid models incorporating state-space layers (e.g., Jamba; please refer to the comments above titled **[Update on Experiments] We also found Jamba** for more details).\\n\\nAs today's mainstream models are decoder-only, we currently did not include encoder-decoder architectures. Our belief is that retrieval heads are a property that emerges with attention layers, so as long as the architecture has at least one layer of global attention, we tend to believe there will be retrieval heads within it, whether decoder-only or encoder-decoder. \\n\\n\\n>2. How does the model determine when retrieval heads should be dynamically activated? Is there a mechanism or threshold within the model that dictates when these retrieval heads become active, especially in different contexts?\\n\\nWe are also fascinated by how exactly the firing/triggering of retrieval happens within the architecture. Currently we are unable to pinpoint a triggering mechanism -- or maybe such a mechanism is intrinsically hard to identify because attention is basically a dot product between two vectors, and whether retrieval happens basically traces back to the exact values of the key-value vectors, which are outputs of previous layers and thus a little hard to determine. \\n\\nWe envision that future research could adopt approaches similar to the ongoing exploration of the [\\\"physics of LLMs\\\"](https://physics.allen-zhu.com), starting with smaller models and synthetic datasets to isolate individual model behavior.\\nFor example, keeping all other factors the same, one could control only one aspect of the data, such that one model can do retrieval and another model cannot. 
We would be excited to see how this would be studied in future work.\"}", "{\"summary\": \"This paper identifies and analyzes the properties of retrieval heads, a specialized type of attention head primarily responsible for retrieving information. Key properties include:\\n\\n1. **Universality** \\u2013 retrieval heads are present across all explored models.\\n2. **Sparsity** \\u2013 only a small subset of attention heads serve this retrieval function.\\n3. **Intrinsic nature** \\u2013 these heads exist in pretrained models, even those trained on short contexts.\\n4. **Dynamic activation** \\u2013 their activation varies depending on specific tokens and contexts.\\n5. **Causality** \\u2013 pruning these heads leads to significant performance drops.\\n\\nBy examining the influence of retrieval heads across various tasks, these findings shed light on which internal model components actively seek information from input tokens.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Provides in-depth analysis and extensive experiments on various properties of retrieval heads.\\n\\n2. Well-written, with clear graphics, easy-to-follow explanations, and a well-organized structure.\\n\\n3. Numerous examples and case studies effectively illustrate the properties, making the concepts easy to understand.\", \"weaknesses\": \"Nothing major needs to be addressed. Please address some discussion questions in question sections.\", \"questions\": \"1. Are retrieval heads consistent across different architectures? Does the role of retrieval heads vary with different transformer architectures (e.g., decoder-only vs. encoder-decoder models), or are these properties universally applicable?\\n\\n2. How does the model determine when retrieval heads should be dynamically activated? 
Is there a mechanism or threshold within the model that dictates when these retrieval heads become active, especially in different contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper investigates how transformer-based models extract relevant information from long context. It identifies a specific type of attention head, named retrieval head, which plays a significant role in the information retrieval process. The authors demonstrate how they detect retrieval heads and describe their characteristics through various experimental settings. Additionally, they conducted experiments that involved pruning the retrieval heads to show that these heads are essential for recalling specific information amidst vast amounts of data. Their findings indicate that retrieval heads are crucial for extractive question-answering and chain-of-thought reasoning.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers agree on accepting this paper based on its soundness and novelty. Concerns have been generally addressed in the rebuttal.\"}", "{\"title\": \"Response to reviewer ZPPp\", \"comment\": \"We appreciate the reviewer\\u2019s detailed comments and support. Below are our responses:\\n\\n**Diversity of needle-haystack**\\n\\n> The main concern with the paper is the lack of details on the benchmark used to generate the needle-and-haystack pairs. The paper does not clarify how these pairs are created, the diversity of pairs (e.g., in topic, token variety)\\n> \\n\\nIn our initial submitted version before the rebuttal, we use three sets of needles manually written by our authors. These needles are: \\n\\n1. Needle: A new report from the WMO shows that records were once again broken, and in some cases smashed, for greenhouse gas levels, surface temperatures, ocean heat and acidification. Question: What does a new report from WMO shows?\\n2. 
Needle: The best thing to do in Beijing is to take a walk in Chaoyang Park and have a cup of Espresso in the evening. Question: What is the best thing to do in Beijing?\\n3. Needle: Mr Green is disliked by everyone because he is a mean person and also he can't ride a horse or dive a car. Question: Why does everyone dislikes Mr Green?\\n\\nThe haystacks are randomly sampled documents from Slimpajama. \\n\\nSince the reviewer is concerned about the diversity of the needles, we conduct a follow-up experiment to demonstrate that the detection results of retrieval heads do not change when increasing the diversity of needles. Specifically, we use the above three cases as in-context examples to a language model, and ask the language model to generate 100 more examples. By doing this we get a synthetic dataset of 1.2K unique tokens spanning 10 topics (Technology, Transportation, Education, Festivals, Health, etc.). Below are three examples of the synthetic data: \\n\\n1. Needle: The newly constructed SkyBridge connects three cities\\u2014Everdale, Pinehurst, and Riverpoint\\u2014allowing citizens to travel with ease, admire scenic views, and save significant commute time. Question: What does the newly constructed SkyBridge connect and offer?\\n2. Needle: The upcoming GalaxyFest will feature over 100 sci-fi authors, 50 exclusive book signings, and a virtual reality experience of Mars colonization. Question: What features will the upcoming GalaxyFest have?\\n3. Needle: The Aurora Conservatory is renowned for its collection of rare Arctic flora, cutting-edge climate research, and eco-friendly glass dome architecture. Question: Why is the Aurora Conservatory renowned?\\n\\nRepeating the retrieval head detection algorithm on Mistral 7B with our newly generated 100 needles, we get the same set of retrieval heads as before. This is to say, we confirm that our conclusion holds when scaling the number of needles from 3 to 100. We have added the details in appendix Figure 14. 
\\n\\n**Other important comments**\\n\\n1. We added the clarification noting \\u201cKV\\u201d means \\u201ckey-value\\u201d in the last line of the abstract. \\n2. We elaborated the meaning of Figure 3 in the caption by explaining the color and the sparsity of retrieval heads. \\n3. We added the elaboration of Figure 5. Note that Figure 5 is complementary to Figure 6 because it not only shows that the chat model and base model share the same set of retrieval heads, but also shows that these heads are mostly within middle layers. \\n4. We have added examples of masking out retrieval heads vs. random heads for paraphrasing and question answering in the updated appendix Figure 15. These examples consistently demonstrate the influence of retrieval heads in downstream tasks.\"}", "{\"title\": \"Thanks for addressing my question\", \"comment\": \"Thanks for addressing my question. It is nice to see that the properties of the retrieval head also apply to paraphrasing and QA tasks. I think it is a good paper. I will maintain my score of 8.\"}", "{\"title\": \"[Update on Experiments] We also found Jamba has retrieval heads\", \"comment\": \"We are excited to observe that Jamba, a hybrid model combining **state-space layers (Mamba)**, mixture-of-experts (MoE), and a limited number of attention layers (4 layers \\u00d7 32 heads = 128 total attention heads), also exhibits retrieval heads.\\n Interestingly, these retrieval heads within the attention layers appear to play a key role in Jamba\\u2019s retrieval capabilities. 
\\nBelow, we provide a comparison of **masking top-k retrieval heads** versus **random heads** in [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) (12B active parameters, with a total of 52B parameters across all experts) on needle-in-a-haystack experiments using the same settings as Figure 7 in our paper:\\n| Masking Head number | 0 | 2 | 5 | 10 | 15 | 20 | 30 | 50 | 100 |\\n|-------------------|-----|------|------|------|------|------|------|------|------|\\n| Masking Random Head | 100 | 99.1 | 98.0 | 94.7 | 90.3 | 85.1 | 70.3 | 44.1 | 9.0 |\\n| Masking Top Retrieval Head | 100 | 98.1 | 38.5 | 61.9 | 33.0 | 15.3 | 10.3 | 12.9 | 3.5 |\\n\\nThese results provide strong evidence supporting our hypothesis that \\u201cfull attention is crucial for effective long-context information retrieval\\u201d (L477) in the section \\u201cRelationship to Local and Linear Attention and State-Space Models\\u201d. Masking top retrieval heads significantly impacts retrieval performance for Jamba, confirming their essential role in maintaining model capabilities.\\nWe will incorporate these updated results into Figure 7, along with comparisons to other models, to further strengthen our findings.\\n\\nPlease refer to Section 7.1 in the appendix to see the new figure with Jamba.\"}", "{\"summary\": \"The paper provides a systematic examination of a specific type of attention head, termed \\\"retrieval heads,\\\" which primarily handle information retrieval from input data. It introduces an approach based on the Needle-In-a-Haystack (NIAH) setup to empirically identify retrieval attention heads across various transformer-based architectures. 
The findings demonstrate that:\\n1) retrieval heads are present across a diverse set of models,\\n2) only a small subset of attention heads function as retrieval heads,\\n3) these retrieval heads exist even in models pretrained with limited context, suggesting they are an intrinsic artifact of pretraining,\\n4) they are dynamically activated rather than continuously active, and\\n5) there is a causal link between retrieval heads and the model\\u2019s capability to retrieve relevant information.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Overall, the paper is well-structured, the assumptions are clear, and, with a few exceptions listed in the \\\"weaknesses\\\", the methodology is clear. The results and experiments robustly support the authors\\u2019 claims.\", \"weaknesses\": \"The main concern with the paper is the lack of details on the benchmark used to generate the needle-and-haystack pairs. The paper does not clarify how these pairs are created, the diversity of pairs (e.g., in topic, token variety), or the validation methods used. Including these details would provide reviewers with valuable insights into the experimental design, helping them better assess the generalizability and universality of the findings.\", \"additional_minor_points\": \"1) The acronym \\\"KV\\\" is not defined anywhere in the paper. Based on the context, it likely stands for \\\"Key-Value,\\\" but this should be explicitly stated.\\n2) The caption for Figure 3 could benefit from significant revision. It\\u2019s challenging to interpret without detailed reference to the discussion, so adding clarifying information in the caption itself would help.\\n3) Figure 5 is difficult to interpret; it\\u2019s unclear what is being visualized. 
Overlaying the heatmaps for comparison could enhance clarity, or, if this visualization is redundant given the results in Figure 6, consider omitting it.\\n4) The discussion in Section 4, in particular, would benefit from qualitative examples (perhaps as an appendix) to illustrate and substantiate claims related to downstream tasks.\", \"questions\": \"Please check the list of minor points in \\\"weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"summary\": \"The paper investigates how transformer-based models extract relevant information from long context. It identifies a specific type of attention head, named retrieval head, which plays a significant role in the information retrieval process. The authors demonstrate how they detect retrieval heads and describe their characteristics through various experimental settings. Additionally, they conducted experiments that involved pruning the retrieval heads to show that these heads are essential for recalling specific information amidst vast amounts of data. Their findings indicate that retrieval heads are crucial for extractive question-answering and chain-of-thought reasoning.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The paper shows originality by exploring the retrieval capabilities of transformer language models, a topic that has not been extensively studied.\", \"The paper is well-organized and easy to follow. The paper first defines the special type of attention head that plays a significant role in recalling relevant information during generation. 
Then, it successfully proves the existence of the retrieval heads within the Needle-in-a-Haystack (NIAH) task and further illustrates several properties of the retrieval heads which are quite interesting.\", \"Figures are clear and intuitive to comprehend\", \"The paper is impactful as it proposes prospective research directions involving retrieval heads.\"], \"weaknesses\": [\"The work is limited to the Needle-in-a-Haystack (NIAH) task. Although NIAH is a good task to prove the existence of the retrieval heads, we do not know if similar findings and significance would transfer to other tasks where the LM needs to paraphrase or utilize the previous context (not just copy-and-paste), which are more complex and closely related to real-world applications.\"], \"questions\": [\"It would be better to show that similar findings transfer to paraphrasing tasks.\", \"More appendix figures for evaluation results of Retrieval Head Detection Algorithm on various settings (line 199)\", \"Minor fixes\", \"line 247-250 repetition\"], \"figure_9_caption_typo\": \"needels\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Update on Experimental Details\", \"comment\": \"We thank all reviewers for their valuable feedback regarding the experimental details. In response to these comments, we have updated the paper to include the required details. We also elaborate on this information below:\\n\\n### **1. NIAH experiments details:**\", \"before_rebuttal\": [\"We manually construct / write four sets of (question, answer) pairs that are semantically irrelevant to the long document (\\\"haystack\\\"). 
The long documents for all four sets are randomly sampled from the publicly available\\u00a0[SlimPajama](https://huggingface.co/datasets/yaofu/slimpajama-per-source-length-upsample/viewer/default/train).\", \"We use three sets of (question, answer) pairs for retrieval head detection (i.e., calculating the retrieval scores discussed in Section 2), and reserve one (question, answer) pair for testing (generating Figure 1).\", \"A maximum sequence length of 50K tokens is used for retrieval head detection, while the full 128K token length is used during testing. This ensures that the retrieval heads generalize to lengths longer than those used to detect them.\"], \"the_needles_for_retrieval_head_detection_are\": \"1. Needle: A new report from the WMO shows that records were once again broken, and in some cases smashed, for greenhouse gas levels, surface temperatures, ocean heat and acidification. Question: What does a new report from WMO shows?\\n2. Needle: The best thing to do in Beijing is to take a walk in Chaoyang Park and have a cup of Espresso in the evening. Question: What is the best thing to do in Beijing?\\n3. Needle: Mr Green is disliked by everyone because he is a mean person and also he can't ride a horse or dive a car. Question: Why does everyone dislikes Mr Green?\", \"during_rebuttal\": \"Since reviewer ZPPp is concerned about the diversity of the needles, we use the above three cases as in-context examples to a language model, and ask the language model to generate 100 more examples. By doing this we get a synthetic dataset of 1.2K unique tokens spanning 10 topics (Technology, Transportation, Education, Festivals, Health, etc.). Below are three examples of the synthetic data. The full needle set is uploaded as supplementary material: \\n\\n1. Needle: The newly constructed SkyBridge connects three cities\\u2014Everdale, Pinehurst, and Riverpoint\\u2014allowing citizens to travel with ease, admire scenic views, and save significant commute time. 
Question: What does the newly constructed SkyBridge connect and offer?\\n2. Needle: The upcoming GalaxyFest will feature over 100 sci-fi authors, 50 exclusive book signings, and a virtual reality experience of Mars colonization. Question: What features will the upcoming GalaxyFest have?\\n3. Needle: The Aurora Conservatory is renowned for its collection of rare Arctic flora, cutting-edge climate research, and eco-friendly glass dome architecture. Question: Why is the Aurora Conservatory renowned?\\n\\nRepeating the retrieval head detection algorithm on Mistral 7B with our newly generated 100 needles, we get the same set of retrieval heads as before. This is to say, we confirm that our conclusion holds when scaling the number of needles from 3 to 100. We have added the details in appendix Figure 14. \\n\\nWe apologize for any confusion caused by the initial omission of these details and have updated the paper to include this information. Our full reproducible code and data are open-sourced, and we will reveal the link after the anonymity period.\\n\\n### **2. Other Experiments and Prompts**\\nAs our primary focus is the fundamental properties of retrieval heads, we do not emphasize prompt engineering, opting instead for the simplest possible prompts.\\nThe datasets used, excluding **ExtractiveQA** (details are mentioned in Section 4.2 and we will open-source it) and **NIAH**, are publicly available:\\n| **Dataset** | **Prompt & Source** |\\n|--------------------|--------------------------------------------------------------------------------------------------------------------------------------|\\n| **Musique** | Prompt and data from [LongBench](https://huggingface.co/datasets/yanbingzheng/LongBench). The CoT version adds: \\\"please first think step by step.\\\" |\\n| **GSM8K and MMLU** | Prompt and data from [Chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub). 
|\\n\\nWe hope these clarifications address the reviewers' concerns\"}", "{\"title\": \"Response to Reviewer Br4p\", \"comment\": \"We thank the reviewer for the detailed comments. Below we note:\\n\\n**About missing details:**\\n>1. The paper misses dataset details (L195, L355, L427). Are NIAH samples created manually or by prompting large language models? What datasets are used to begin with in Sec 2-4? What additional evaluation tests did you create in Sec 4.1-4.3?\\n\\n> The paper misses experimental details, such as prompts used, links to existing assets used, etc.\\n\\nWe appreciate the comment on missing details. Please see our **[Update on Experimental Details]**, where we have included comprehensive descriptions of the datasets, methods for creating NIAH samples, and evaluation tests. We would also be happy to provide further follow-up explanations. These updates will also be incorporated into the revised version of our paper. Our full reproducible code and data is open-sourced, and we will reveal the link after the anonymity period. \\n\\n**About $R$:**\\n > L156: By $R$, do you mean real numbers? If so, perhaps use $\\\\mathbb{R}$ instead and clarify that refers to unnormalized attention scores.\\n\\nYes, $R$ refers to real numbers, and $\\\\mathbb{R}$ is indeed more precise. We have updated the notation and clarified its meaning in the revised paper.\\n\\n**Question on Retrieval Head detection:**\\n> L194: Does it happen that the model generates a word that is not the next token in the needle that should be generated? If this happens, do you skip the example? Or consider that as a case when all attention heads do not perform a copy-paste operation, even if an attention head actually pays the most attention to the token that should be generated next?\\n\\n- Our retrieval head detection is specifically targeting the copy-paste behavior. 
So yes, if the model generates a token that is not in the needle, we do not count it, no matter whether the attention head is attending to the target token or not. \\n- That being said, since the needle sentence contains multiple tokens, to identify retrieval heads the model does not necessarily need to generate all of the tokens in the needle. Say a needle \\\"best place to visit in SF ...\\\" contains 20 tokens; as long as the model copy-pastes a fair portion of the tokens, say 14 out of 20, this level of copy-paste is enough for us to identify how strongly/frequently an attention head is doing retrieval.\\n\\n>L203: What do you mean by \\u201cstabilize\\u201d and \\u201cconverge\\u201d? Please either provide definitions or plots to illustrate.\\n\\n- Since heads detected with strong retrieval scores in one context may not exhibit strong retrieval scores in another context, we conduct retrieval head detection over multiple (question, answer) pairs and multiple long contexts (haystacks). \\n- As we increase the number of trials for detecting retrieval heads, \\u201cstabilize\\u201d and \\u201cconverge\\u201d refer to the ranking of attention heads (based on retrieval scores) becoming consistent across repeated trials of retrieval head detection. \\n- We have added definitions and included supporting plots in Section 7.2, Figure 13, in the updated Appendix.\\n\\n**Writings**\\n> It\\u2019d facilitate reading to clarify that \\\"$k$ is a sentence that is irrelevant to $x$\\\" in L146, instead of, for example, a short phrase or a single word. Can add a reference to Figure 2 so that readers see an example.\\n\\n>Figure 3: Seems that in fact less than 1% of attention heads are activated more than 50% of the time. 
The 5% in the caption could probably be changed to 1%.\n\n> Figure 7: Could be nice to include the dataset name in the caption.\n\nThanks for the suggestions, and we have modified the paper in our new revision.\"}", "{\"title\": \"Response to reviewer xfgX\", \"comment\": \"We thank the reviewer for their support and detailed comments! The reviewer is mainly concerned with how the model behavior would transfer to other tasks like paraphrasing. Here we note:\n\n**Experiments on paraphrasing task** \n\nBelow we show an example of how masking out retrieval heads breaks the model\u2019s understanding of semantic dependency while masking out random non-retrieval heads does not:\n\n- Input: The Whispering Forest of Lunthera, home to bioluminescent **insects**, is famed for its **murmuring trees** and an ancient legend about lost travelers **finding their way home**.\n- Mask 30 retrieval: Lunthira\u2019s Whispering Woods, home to luminescent **creatures**, is renowned for its **lulling trees** and a mythical tale about lost adventurers who happened upon the forest by chance.\n- Mask 30 random: Lunthera\u2019s Whispering Forest, abounding in luminescent insects, is renowned for its conversing trees and a fable about lost travelers finding their way home.", "in_this_example": "if one masks out the retrieval heads, the model outputs \u201cluminescent **creatures**\u201d without specifying they are **insects**; it outputs **lulling trees** while the input says **murmuring trees**. The input also says lost travelers \u201cfind their way home\u201d, but this information is missed. 
In contrast, masking out random heads does not have these problems", "below_is_another_example_showing_how_masking_out_retrieval_heads_make_the_model_hallucinate_about_information_that_does_not_exist_in_the_input": ["Input: The glowing sands of the Duskveil Desert, enriched with **rare minerals**, shimmer in the dark and are said to hold heat for days after the sun sets.", "Mask 30 retrieval: In the Dusk Veil Desert, void of life and **composing largely of glass**, the sand is imbued with solar energy and preserves heat under the twilight.", "Mask 30 random: The verdant sands of the Duskveil Desert, **composed of rare minerals**, linger with a glowing glow at sunset and are said to retain heat for days.", "In this example, masking out retrieval heads makes the model say the sand is \u201c**composing largely of glass**\u201d while the input only says it is enriched with \u201c**rare minerals**\u201d. Masking out random heads does not have these problems.", "In the updated paper\u2019s Appendix Figure 15, we give more examples of how masking out retrieval heads influences the model\u2019s behavior on paraphrasing and question answering, while masking out random non-retrieval heads does not significantly change the model\u2019s behavior.", "**Other important comments**", "We fixed the repeated sentence in section 3.", "We fixed the typo \u201cneedel\u201d in Figure 9."]}" ] }
EyaH1wzmao
The Ramanujan Library - Automated Discovery on the Hypergraph of Integer Relations
[ "Itay Beit Halachmi", "Ido Kaminer" ]
Fundamental mathematical constants appear in nearly every field of science, from physics to biology. Formulas that connect different constants often bring great insight by hinting at connections between previously disparate fields. Discoveries of such relations, however, have remained scarce events, relying on sporadic strokes of creativity by human mathematicians. Recent developments of algorithms for automated conjecture generation have accelerated the discovery of formulas for specific constants. Yet, the discovery of connections between constants has not been addressed. In this paper, we present the first library dedicated to mathematical constants and their interrelations. This library can serve as a central repository of knowledge for scientists from different areas, and as a collaborative platform for development of new algorithms. The library is based on a new representation that we propose for organizing the formulas of mathematical constants: a hypergraph, with each node representing a constant and each edge representing a formula. Using this representation, we propose and demonstrate a systematic approach for automatically enriching this library using PSLQ, an integer relation algorithm based on QR decomposition and lattice construction. During its development and testing, our strategy led to the discovery of 75 previously unknown connections between constants, including a new formula for the `first continued fraction' constant $C_1$, novel formulas for natural logarithms, and new formulas connecting $\pi$ and $e$. The latter formulas generalize a century-old relation between $\pi$ and $e$ by Ramanujan, which until now was considered a singular formula and is now found to be part of a broader mathematical structure. The code supporting this library is a public, open-source API that can serve researchers in experimental mathematics and other fields of science.
[ "Continued Fractions", "Mathematical Constants", "Integer Relations", "Experimental Mathematics", "Riemann Zeta Function", "Irrational Number", "PSLQ", "AI In Mathematics", "Automated Conjecture Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=EyaH1wzmao
https://openreview.net/forum?id=EyaH1wzmao
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zC7bjVdHI6", "rDZV3pNBrG", "qsGAOLg888", "pKJXe14BUx", "pFqX1K1E2i", "oDpjo1Vw1i", "kKRGzWKE11", "YNfFs4l8nk", "UyDBWklx6I", "QgYkIx8rjV", "QDSbnwGA0P", "LISClyDFwe", "IdOy2f7exS", "I93oZ3n1Hx", "8SKTLMSKF8" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732184944870, 1734582200390, 1730809576092, 1730638644751, 1732479630015, 1732185425978, 1732185321764, 1732752705702, 1730662679922, 1732736626638, 1732466197786, 1732185303343, 1737524162324, 1732185414153, 1732184872322 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Submission12034/Area_Chair_WFGB" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_cctr" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_AmTb" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_Bz6b" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_Bz6b" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_cctr" ], [ "ICLR.cc/2025/Conference/Submission12034/Reviewer_AmTb" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ], [ "ICLR.cc/2025/Conference/Submission12034/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response part 2 to Reviewer cctr\", \"comment\": \"* __From my understanding, perhaps inherent to the problem the authors are trying to solve, until a proof is given, it is almost never guaranteed that any new relations found are actually correct. 
This makes the problem rather ill-defined since there are no performance metric attributed to the proposed methodology, making it hard, almost impossible, to judge its effectiveness, outside of engineering perspective.__\n\nWe thank the referee for bringing up this point. Regarding a performance metric, we use a quantifiable method involving the number of digits of accuracy in a discovered conjecture, compared to the number of integer digits used. This approach provides a measure for predicting the likelihood of correctness of each discovered formula. \nWe quantify this approach using a metric we call Return on Investment (RoI). For example, consider the formulas we show in figure 3, each with RoI in the thousands. Our algorithm assumes a minimal RoI of 2, which is already good enough to separate significant formulas from noise. This is why the problem is no longer ill-defined, but follows a clear performance metric.\nWe updated section 2.1 so it references the following section 3, where we discuss RoI in detail and provide experiments that justify its use in practice.\n\nAs additional evidence of the reliability of our approach, we also provide 24 formulas that we succeeded in proving, presented in appendix E. We focused there on cases of especially slow convergence, for which the numerical precision was limited to only 10-20 digits. These proofs help validate our approach and show that the results can be relied on in future research efforts. We are working in parallel with mathematicians on general mathematical approaches for proofs that can be applied at scale to the large number of newly discovered formulas.\n\nLet us comment more generally on the importance of finding new unproven formulas (or generally conjectures): The value of such discoveries in mathematics cannot be overstated, as it is usually such a conjecture that acts as the first step in mathematical research, which eventually leads to the discovery of a new theory. 
As an example, consider Srinivasa Ramanujan\\u2019s contributions to mathematics, many of which were initially conjectures and were not proven by him, yet his impact on the mathematical world is undeniable. In a similar way, the conjecture generation algorithm in our work provides new leads for mathematical research that can have long-term impact.\\nHistorically, conjecturing often relied on human intuition or creativity, and only recently has it benefited from automation. This is where our contribution lies, within the field of automated conjecture generation.\\n\\n__Overall, this is still an interesting paper, even as an engineering paper so I still advocate for publication just for the code and methodology to be published for more to see.__\\n\\nWe thank the referee for the kind words. The code is currently available in the supplementary material, and will be replaced with a Github link in the camera-ready version as well. We also added a tutorial to the README in the code, showing queries against the database provided within.\\n* __Line 42, I believe it is quite a stretch to claim that any computer assisted proof (which comprises of the majority of citations in this sentence are examples of \\\"usage of AI as a scientific tool\\\"__\\n\\nWe thank the referee for bringing this up. This terminology was used in the literature in the old days, where the definition of AI was broader. We revised this sentence to now say \\u201cusage of computer algorithms as scientific tools\\u201d.\\n* __Line 101-104, it took me awhile to parse the sentences here. This can be solved by explicitly stating the definition of C-transform and how it differs from continued fraction since at first glance and without definition, it's not easy to distinguish. Alternatively, one can also just give an example of the difference. 
An example of why this is confusing is: it seems that C-transform, as stated on arbitrary function f_n, captures all continued fraction; yet some continue fractions cannot be converted to infinite sums when C-transforms can?__\n\nWe apologize for the confusion we have caused with these lines. Continued fractions, in general, generalize infinite sums. Certain continued fractions can be converted to simple infinite sums, but the majority cannot be. C-transforms are used as a canonical representation that can be applied to any continued fraction, in a way that unifies infinitely many continued fractions. Thus C-transforms are also more expressive than infinite sums. The updated text now reflects this, and includes a more explicit definition of C-transforms.\n* __What is the question mark in table 2?__\n\nThis question mark denotes a case of a C-transform for which there is currently no formula that predicts its error at a given evaluation depth. In the updated text, the question mark is replaced with an \u201cN/A\u201d, which is mentioned in the caption.\"}", "{\"metareview\": \"The main claim of the paper is an approach for learning a hypergraph of relations between integers. This can be useful for people working in number theory to discover new relationships that were previously unknown, or unknown to the researcher. As the reviewers note, this is not a typical paper for this community, but there was significant interest in the ideas presented.\n\nThe strengths were that the ROI scheme may prove to be generalizable to other domains/applications. The introduction of new ideas and problems to the ICLR community was viewed as a significant strength.\n\nThe weaknesses were that the paper may not be viewed as significant to those outside the domain/field of number theory. The clarity of the exposition was also a cause of concern for some reviewers.\n\nThe authors made a clear effort to connect this work to the ICLR community. 
They seem to have made use of the technology the community has developed, adapted it, and improved it for their use. While the reviews for this paper had a wide range, there seems to have been sufficient interest and consensus that the claims rest firmly on the evidence to support acceptance. The concerns about exposition and clarity have been taken seriously by the authors and the paper has benefited greatly from the advice, comments and engagement of the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"A question that arose in the discussion was what the differences are between the hypergraphs in this paper and traditional knowledge graphs. Here the reviewers were very helpful in articulating the differences. The authors are encouraged to describe the relationship between their work and traditional knowledge graphs so that the ICLR community can better place this work into context. The reviewers also helped to contextualize the contributions of the work.\n\nThe authors, appropriately, focused on the concerns about clarity and exposition. This seems to be a topic that is outside of the focus of most of the community, but has drawn interest from the reviewers. I expect that others in the community will find similar interest in the paper even if it doesn't immediately have a direct impact on their day-to-day work. There is value in pushing the boundaries in pure mathematics where they come into contact with the core competencies of the ICLR community. I think the reviews indicate the authors have made such an effort.\"}", "{\"summary\": \"The paper introduces a new search algorithm for integer polynomial relations between mathematical constants. The authors organize their algorithm into a library and use a hypergraph as a data structure to store the relations. 
The library can interact with a relation finding machine to search for new relations.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Strengths:\", \"The paper claims to have discovered new integer relations between constants that have not been discovered before. This is an exciting and relevant direction for publication in ICLR since machine assisted mathematics has many unrevealed potentials.\", \"The methodology is sound and correct, since it is a combination of previously published and peer-reviewed systems.\", \"The ROI scheme, while simple, may prove to be generalizable to other domains/applications.\", \"The resulting code seems to be available publicly.\", \"The paper is mostly written in a clear and comprehensive way, with appropriate figures and tables for demonstration.\"], \"weaknesses\": [\"Some weaknesses include:\", \"There is almost no machine learning (emphasis on the 'learning' part) in the paper, as it seems to be an engineering/database product that interfaces with existing code base. This is, of course, not a weakness on the content of the paper, but on its fit for the venue of publication.\", \"The use of hypergraphs as data structure, while theoretically clean, is perhaps only practically useful if the code can make use of graph symmetries (for instance in a graph learning framework). Due to the lack of practical ways to distinguish sets vs sequence in hardware, I very much doubt storing relations as hypergraphs is much more practically advantageous than any naive data structure. Of course, some data structure is needed to store the relations, but the authors also claimed that this is an \\\"effective representation\\\" (line 72) and overall feature it as one of their main contributions.\", \"From my understanding, perhaps inherent to the problem the authors are trying to solve, until a proof is given, it is almost never guaranteed that any new relations found are actually correct. 
This makes the problem rather ill-defined since there are no performance metric attributed to the proposed methodology, making it hard, almost impossible, to judge its effectiveness, outside of engineering perspective.\", \"Overall, this is still an interesting paper, even as an engineering paper so I still advocate for publication just for the code and methodology to be published for more to see.\"], \"questions\": [\"Some other minor comments and questions:\", \"Line 42, I believe it is quite a stretch to claim that any computer assisted proof (which comprises of the majority of citations in this sentence are examples of \\\"usage of AI as a scientific tool\\\"\", \"Line 101-104, it took me awhile to parse the sentences here. This can be solved by explicitly stating the definition of C-transform and how it differs from continued fraction since at first glance and without definition, it's not easy to distinguish. Alternatively, one can also just give an example of the difference. An example of why this is confusing is: it seems that C-transform, as stated on arbitrary function f_n, captures all continued fraction; yet some continue fractions cannot be converted to infinite sums when C-transforms can?\", \"What is the question mark in table 2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a new library for automatically discovering functional relations between mathematical constants. The paper outlines the structure of the library as well as the numerical approaches to discovering (polynomial) functional dependencies between fundamental mathematical constants. The authors discuss the convergence properties of the selected approach as well as some limitations and future directions.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"I am not an expert in the area but I enjoyed reading the paper. 
There are quite a few things I like about it:\", \"Fundamental mathematical constants are fascinating, they are often the cornerstones in many scientific disciplines. Discovering new complex relations between them can inspire interdisciplinary discoveries.\", \"The presented approach for the representation of organizing the relations in a hypergraph and searching for new relations using numerical methods is innovative.\", \"The implementation is publicly available and likely to benefit the broader scientific community.\", \"The most time-consuming algorithm is embarrassingly parallel, so it is likely scalable.\"], \"weaknesses\": [\"I think the presentation can be improved. The paper is interesting but at times it felt more like reading an article from Quanta magazine. Here are a few suggestions:\", \"Include a more thorough description of the setup. How was the hypergraph initially created, how many constants and relations were used, how much did the graph expand when new relations were discovered, etc? Some tables and figures can help.\", \"Include some explanation about operating the library. For example, how can one include new constants and start searching for relations? I think some high-level pseudocode will help.\", \"Add some formal results about the validity of results and rate of convergence. I think the discussion in section 3 can be organized in a more formal way using some lemmas. This will help the reader concentrate on the results and maybe think about improvements.\", \"Maybe provide examples with a few more constants, I see only $\\\\pi$, $e$ and $\\\\zeta$.\"], \"questions\": \"Would it be feasible and useful to create a knowledge graph that contains information about the constants and the relations between them? Initially, it will contain human/LLM written explanations, e.g. a short text with references to research papers that introduce the constant or relation and how it is used. 
When new relations are discovered, the graph will be automatically updated with a short description of the discovery process including computational statistics. The edges of the graph can be labeled with the different relation types (e.g. linear, polynomial, etc.) and/or the certainty of the discovered connection. Knowledge graphs are very popular and this would allow the usage of third-party software for a better visualization and understanding of the discovery process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your detailed answer.\\nYour answer makes clearer how your algorithm works, and allows me to start thinking about its implications.\\nWhile the revisions clearly improve the text, I'm still concerned about the quality of the explanation (e.g., it seems still unclear to the reader what the user can exact do or not do with the algorithm/software, e.g., what is possible input, what parameters can be selected, what are considerations to set parameters ...) and the lack of analysis (e.g., I understand that RoI allows somehow to assess how surprising a relation is depending on the description length of the discovered equation and its precision, but it is unclear whether there is proven theory supporting this notion and what assumptions are made, e.g., it is also important to consider the number of conjectures considered as considering more potential equations increases the probability of incorrectly assigning an equation a too low RoI. Also, there isn't much information on practical runtime and problem sizes, next to a few asymptotic upper bounds as in appendix C).\"}", "{\"title\": \"Response part 2 to Reviewer AmTb\", \"comment\": \"**Questions:**\\n\\n**Would it be feasible and useful to create a knowledge graph that contains information about the constants and the relations between them? 
Initially, it will contain human/LLM written explanations, e.g. a short text with references to research papers that introduce the constant or relation and how it is used. When new relations are discovered, the graph will be automatically updated with a short description of the discovery process including computational statistics. The edges of the graph can be labeled with the different relation types (e.g. linear, polynomial, etc.) and/or the certainty of the discovered connection. Knowledge graphs are very popular and this would allow the usage of third-party software for a better visualization and understanding of the discovery process.**\\n\\nWe thank the referee for this insightful question. Presenting our hypergraph of relations as a knowledge hypergraph may be a superior presentation for researchers in knowledge representation, and augmenting it with written explanations and relevant papers is a long-term goal of the Ramanujan Library. We have updated the text in section 6 to add discussion of the hypergraph as a knowledge hypergraph.\"}", "{\"title\": \"Response part 2 to Reviewer Bz6b\", \"comment\": \"* **the precision is defined as log(epsilon), i guess this can lead to problems if epsilon happens to be exactly 0 ?**\\n\\nWe thank the referee for this deduction. In theory, $\\\\varepsilon=0$ implies a true integer relation with no inaccuracy, whose precision is $+\\\\infty$. However, in practice, we may replace $\\\\varepsilon$ with the numerical inaccuracy of the least precise constant involved in the relation, thus avoiding this issue. This is now clarified in the text.\\n\\n* **line 119: \\\"we define its degree as the sum of all exponents in each monomial\\\": does this imply you consider homogeneous polynomials, as else the different monomials may have a different sum of exponents?**\\n\\nWe thank the referee for spotting this mistake. 
The text is now amended to specify the *greatest* sum of all exponents in each monomial.\\n\\n* **line 125: is an edge a hyperedge?**\\n\\nWe apologize for the confusion. Yes, an edge is a special case of a hyperedge, but can also be used interchangeably, as we have in the text. In the updated paper, we now primarily use the term edge, simplifying the text.\\n\\n* **line 127: please define transitivity in this context**\\n\\nWe now define transitivity in the place where it is mentioned.\\n\\n* **line 153: please define \\\"type\\\"**\\n\\nWe apologize for the confusion the word may have caused in this situation. The text has been rewritten to refer to arbitrary partitioning of the space of constants, without using this word.\\n\\n* **line 155: the expression \\\"product space on certain subtypes\\\" is very unclear.**\\n\\nWe thank the referee for this comment. The text has been rewritten to be clearer about how the product space is created.\\n\\n* **line 159: it is unclear what is a combined constant, or what language or equations the user can use to define them.**\\n\\nWe apologize for the confusion we may have caused here. Truthfully, $\\\\pi e$ is as valid a choice of constant as any other. We removed the word *combined* from the relevant sentence in the text.\\n\\n\\nAltogether, we were glad to revise the manuscript in all the aspects suggested by the referee. These improvements helped clarify it and we are grateful for these suggestions.\\n\\n**Section 5 may have provided useful insight in capabilities of the proposed system,if the used notations would have been clear.**\\n\\nSection 5 makes use of notation we have defined in section 2. The text is now clarified with a reference to the relevant definition.\\n\\n**Making abstraction of that problem i observe the displayed equations are rather complicated and hence the search space of possible conjectures must be huge. 
Given one searches only approximate equalities under bounded precision it seems likely one will discover many incorrect conjectures.**\\n\\nWe are thankful for the referee expressing concern regarding this issue. The likelihood of a conjecture being incorrect is something we tackle in our paper using a metric we call Return on Investment (RoI). This approach allowed us to identify conjectures that are extremely unlikely to be incorrect. These are the conjectures presented in our manuscript. The RoI metric expresses how unlikely it is for a conjecture of a certain precision to be obtained given the number of integer digits used. This metric is an essential part of our algorithm, as it is how conjectures that are likely to be true are automatically distinguished from noise.\\nSection 5 in the updated text now references the relevant section 3, where the exact definition of RoI, along with experiments empirically showing its validity, can be found.\\n\\n**Questions:**\\n**It may help to answer questions in my detailed comments, even if that is not guaranteed to clarify everything.**\\n\\nWe were glad to answer these questions and revise the manuscript accordingly. If there are any other points to clarify, please let us know.\"}", "{\"comment\": \"**Thanks for your detailed answer. Your answer makes clearer how your algorithm works, and allows me to start thinking about its implications.**\\n\\nWe thank the referee for considering our answer, and are glad to know that it helped with the understanding of our work.\\n\\n**While the revisions clearly improve the text, I'm still concerned about the quality of the explanation (e.g., it seems still unclear to the reader what the user can exact do or not do with the algorithm/software, e.g., what is possible input, what parameters can be selected, what are considerations to set parameters ...)**\\n\\nWe thank the referee for reviewing the updated manuscript and for raising this concern. 
The Ramanujan Library provides an API with several functionalities:\n* Drawing information from the database, including constants and integer relations. Examples include:\n * `db.constants` retrieves all \u201cfamous\u201d constants.\n * `db.relations()` retrieves all integer relations.\n* The `identify` function, used for numerical identification tasks.\n* Contributing new relations to the database is done primarily through running our search algorithm, which directly uploads its results to the database.\n\nThese capabilities are now explained in the README found in the supplementary material.\n\n**[...] lack of analysis (e.g., I understand that RoI allows somehow to assess how surprising a relation is depending on the description length of the discovered equation and its precision, but it is unclear whether there is proven theory supporting this notion and what assumptions are made**\n\nWe would like to take this opportunity to provide a demonstration of Return on Investment (RoI) and how it can be used to detect and/or reject integer relations in practice:\n\nConsider the constant $C\\left[\\frac{-n^6}{4n^6 + 483n^4 + 14763n^2 - 3721}\\right] \\approx 0.9999131740\u2026$. In appendix F we list this as part of a conjectured relation connecting it to $\\zeta(3)$. Depending on the precision used, PSLQ will provide different conjectures:\n\nUsing 60 binary digits, PSLQ will provide the conjecture $C\\left[\\frac{-n^6}{4n^6 + 483n^4 + 14763n^2 - 3721}\\right] = \\frac{114491\\zeta(3)-30033}{36579\\zeta(3)+63631}$. 
Regardless of whether this conjecture is true, RoI immediately allows one to dismiss it as likely being noise, since the RoI evaluates to approximately $0.67$.\\n\\nIncreasing the binary precision to 110, PSLQ provides the same conjecture we claim in the paper (in appendix F, row 15 in the table):\\n\\n$$C\\\\left[\\\\frac{-n^6}{4n^6 + 483n^4 + 14763n^2 - 3721}\\\\right] = \\\\frac{216000}{13176000\\\\zeta(3) - 15622283}$$\\n\\nHowever, at this point it is not yet notable, as the RoI is still $1.15$.\\n\\nFinally, increasing the binary precision to 445, PSLQ still provides the same conjecture, but with the increased precision it is now a much more viable conjecture, providing an RoI of $5.9$ as we state in the paper. At this point, there is high enough confidence that PSLQ will not change the conjecture it finds even if one were to increase the precision further, implying that this conjecture is likely to be true.\", \"this_example_points_to_a_general_approach_for_getting_arbitrarily_high_confidence_in_the_correctness_of_a_conjectured_formula\": \"one can go on increasing the binary precision of PSLQ, and show that the same formula keeps emerging with higher and higher RoI measures.\\nNote that this demonstration can be thought of as a special case of the large-scale experiments that we show in figures 3 and 7. Under this lens, one can interpret the gradual increase of binary precision as gradually denoising the integer relation, pushing it further away from results typical of random noise.\\n\\n**[...] considering more potential equations increases the probability of incorrectly assigning an equation a too low RoI.**\\n\\nReturn on Investment is determined exactly by the accurate digits involved in an integer relation. RoI may appear low when one constant is significantly less accurate than the other constants involved. This can happen when such a constant is provided by a formula with slow convergence. 
This is an unavoidable problem that requires either finding formulas with significantly better convergence, or brute-force computation of the existing formula. Fortunately, RoI is easy to recalculate once more digits are available.\\n\\n**[...] there isn't much information on practical runtime and problem sizes**\\n\\nWe thank the reviewer for this suggestion. We clarified the text in section 5 to mention that the total compute time that we ran the algorithm to obtain our results is about 16 compute months, while mentioning the embarrassingly parallel nature of our algorithm, which means the runtime in practice goes down with the number of compute cores used. This means that the practical runtime can be made low, given access to a large number of CPU cores.\"}", "{\"summary\": \"This proposes an improved method to discover equations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Automated equation discovery is interesting and providing a library with basic tools is certainly useful.\\nThe text uses good English.\", \"weaknesses\": [\"The text only provides a high-level explanation of a few ingredients of the proposed method. While it may be understandable for a few specialists, the text isn't accessible to the general logic or reasoning expert.\", \"The text doesn't introduce the basics of the considered formalism, and does not offer specifications or examples of inputs or outputs of the system.\", \"#### Supplementary material\", \"The code contains a Readme file but it does not comprehensively explain how to install or use the code. 
Despite looking around the several files, I did not succeed in using or understanding it.\", \"#### details\", \"when defining the hypergraph, make clear whether you consider directed / ordered or undirected edges.\", \"Sec 2 mentions giving constants such as $\\\\pi$ or $e$ explicitly but does not say how to give an irrational number explicitly.\", \"please specify how one can define a C transform (as it contains an infinite number of parameters)\", \"while line 108 talks about polynomial relations, line 111 only gives the form of a linear relation\", \"line 115 requires that the absolute value of the linear expression equals exactly $\\\\epsilon$. I suppose you mean $\\\\le$ instead.\", \"the precision is defined as $\\\\log(\\\\epsilon)$; I guess this can lead to problems if $\\\\epsilon$ happens to be exactly 0?\", \"line 119: \\\"we define its degree as the sum of all exponents in each monomial\\\": does this imply you consider homogeneous polynomials, as otherwise the different monomials may have a different sum of exponents?\", \"line 125: is an edge a hyperedge?\", \"line 127: please define transitivity in this context\", \"line 153: please define \\\"type\\\"\", \"line 155: the expression \\\"product space on certain subtypes\\\" is very unclear.\", \"line 159: it is unclear what a combined constant is, or what language or equations the user can use to define them.\", \"Section 5 may have provided useful insight into the capabilities of the proposed system, if the used notations had been clear.\", \"Setting that problem aside, I observe the displayed equations are rather complicated and hence the search space of possible conjectures must be huge. 
Given that one searches only for approximate equalities under bounded precision, it seems likely one will discover many incorrect conjectures.\"], \"questions\": \"It may help to answer questions in my detailed comments, even if that is not guaranteed to clarify everything.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and for addressing my questions. I am increasing my score to an 8.\"}", "{\"comment\": \"I thank the authors for their responses and I maintain my score. Good luck with the paper!\"}", "{\"title\": \"Response part 1 to Reviewer Bz6b\", \"comment\": \"**The text only provides a high-level explanation of a few ingredients of the proposed method. While it may be understandable for a few specialists, the text isn't accessible to the general logic or reasoning expert. The text doesn't introduce the basics of the considered formalism, does not offer specifications nor examples of inputs or outputs of the system.**\\n\\nWe thank the referee for this comment. We improved the manuscript in two aspects: (1) We have rewritten the text so it is more accessible, and better presents the reasoning. (2) We provide concrete basic examples of inputs and outputs of the system, both at the level of populating the hypergraph, and at the level of automated discovery of a single edge.\\n\\n(1) To make the text more accessible, we revised the introduction so it now contains a paragraph providing a high-level explanation of our work and its contribution, explaining the general logic. To summarize this logic here, mathematical constants are normally associated with unrelated scientific disciplines. However, it is possible to relate them through mathematical formulas, leading to surprising connections. One such connection is the solution to the Basel problem, namely $\\\\zeta(2)=\\\\pi^2/6$. 
Another is the following formula by Ramanujan, featuring an unusual combination of $\\\\pi$ and $e$:\\n$$\\\\sqrt{\\\\frac{\\\\pi e}{2}}=\\\\cfrac{1}{1+\\\\cfrac{1}{1+\\\\cfrac{2}{\\\\ddots+\\\\cfrac{n}{1+\\\\ddots}}}} +\\n1+\\\\frac{1}{3}+\\\\frac{1}{15}+\\\\cdots+\\\\frac{1}{(2n-1)!!}$$\\nOne contribution of our work is to automatically discover and catalogue such relations between constants, resulting in a hypergraph of integer relations. As more relations are added to this hypergraph, it better captures the full network of formulas that connect the constants.\\n\\n(2) To provide concrete examples for inputs and outputs of the system, we updated section 2.1 so it first explains the algorithm as it operates from a totally disconnected hypergraph of integer relations, and later shows how the algorithm can also accept a partially filled-in hypergraph, using its existing relations to save time.\\n\\nIn addition, the caption of table 3 has been extended with an example, showing how the relation $C\\\\left[\\\\frac{-2n^4}{9n^4 - 3n^2 + 1}\\\\right] = \\\\frac{1}{2G}$ can be found using PSLQ:\\n\\n_\\u201cAssuming a polynomial with degree $2$ and order $1$, and given Catalan's constant $G$ and $C\\\\left[\\\\frac{-2n^4}{9n^4 - 3n^2 + 1}\\\\right]$, PSLQ can find integers $n_1,n_2,n_3,n_4$ such that $n_1C\\\\left[\\\\frac{-2n^4}{9n^4 - 3n^2 + 1}\\\\right]G + n_2C\\\\left[\\\\frac{-2n^4}{9n^4 - 3n^2 + 1}\\\\right] + n_3G + n_4 = 0$. In this case, one can find $n_1=2, n_2=0, n_3=0, n_4=-1$, and given that both $C\\\\left[\\\\frac{-2n^4}{9n^4 - 3n^2 + 1}\\\\right]$ and $G$ are known to high precision, this is a signal that an integer relation exists. The other relations here can be captured in a similar way, using more complex integer polynomials.\\\"_\\n\\n**Supplementary material**\\n\\n**The code contains a Readme file but it does not comprehensively explain how to install or use the code. 
Despite looking around the several files, I did not succeed in using or understanding it.**\\n\\nWe thank the referee for expressing interest in the supplementary material. We have improved the README contained within to give more detailed instructions for installing Python and using the code.\\n\\n**details**\\n* **when defining the hypergraph, make clear whether you consider directed / ordered or undirected edges.**\\n\\nWe now clarify that our hypergraph is undirected, when it is first introduced in section 2.\\n\\n* **Sec 2 mentions giving constants such as pi or e explicitly but does not say how to give an irrational number explicitly.**\\n\\nWe thank the referee for this comment. In theory, one may use irrational numbers symbolically, but in practice one provides the digits of any such number up to a chosen precision. This is now clarified in the text.\\n\\n* **please specify how one can define a C transform (as it contains an infinite number of parameters)**\\n\\nWe thank the referee for this comment. We now clarify in the text that a C transform is defined using an arbitrary complex sequence $f_n$. In practice, $f_n$ will be generated from a rational function, which means that the space of all such C transforms is indeed (countably) infinite.\\n\\n* **while line 108 talks about polynomial relations, line 111 only gives the form of a linear relation**\\n\\nWe thank the referee for this observation. The text has been clarified so it introduces the concept of polynomial relations gradually, through integer relations.\\n\\n* **line 115 requires that the absolute value of the linear expression equals exactly epsilon. I suppose you mean <= instead.**\\n\\nWe apologize for the confusion we may have caused in our wording. We in fact define and use $\\\\varepsilon$ as the exact numerical error, so it is not an inequality. 
The text is now rewritten to reflect this:\\n$\\\\varepsilon:=|a_1x_1+a_2x_2+...|\\\\geq 0$\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response part 1 to Reviewer AmTb\", \"comment\": \"**I think the presentation can be improved. The paper is interesting but at times it felt more like reading an article from Quanta magazine. Here are a few suggestions:**\\n* **Include a more thorough description of the setup. How was the hypergraph initially created, how many constants and relations were used, how much did the graph expand when new relations were discovered, etc? Some tables and figures can help.**\\n\\nWe thank the referee for the constructive questions. We updated the text to address these questions:\\nSection 2.1 now explains the algorithm, starting from a totally disconnected hypergraph of integer relations, and later showing how the algorithm can also accept a partially filled-in hypergraph, using its existing relations to save time.\\nSection 5 now specifies that we obtained our results after starting from a totally disconnected hypergraph, containing the constants of our interest. That is, our setup had no relations when initializing the hypergraph. As a consequence of this, the hypergraph was expanded by 118 edges, each corresponding to a relation, all of which are documented in the paper.\\nThe hypergraph figure appears at the end of the text, after being discussed in Section 5. In its presentation, we omitted \\\"redundant\\\" relations for clarity. Additional tables in the main text and appendices summarize the relations (see tables 2 and 3 for a few examples, and appendix F for a full listing).\\n\\n* **Include some explanation about operating the library. For example, how can one include new constants and start searching for relations? 
I think some high-level pseudocode will help.**\\n\\nWe now provide a detailed README with instructions on how to install Python and operate our library, along with some simple code snippets for retrieving information about the hypergraph of integer relations. The supplementary material also contains the code we used for searching for relations at scale, and instructions on how to find relations on one set of constants through our `identify` feature. Both of these are described in the README.\\n\\n* **Add some formal results about the validity of results and rate of convergence. I think the discussion in section 3 can be organized in a more formal way using some lemmas. This will help the reader concentrate on the results and maybe think about improvements.**\\n\\nRegarding the validity of results, our work relies on our Return on Investment (RoI) metric to quantify how unlikely a conjecture is to be false. We justify the use of RoI through experiments that show statistical evidence for how RoI may be used in practice. The updated text now better reflects this in section 6.\\n\\nAs an additional formal contribution that strengthens the validity of our results, we also provide 24 formulas that we succeeded in proving, presented in appendix E. We focused there on cases of especially slow convergence, for which the numerical precision was limited to only 10-20 digits. These proofs help validate our approach and show that the results can be relied on in future research efforts. We are working in parallel with mathematicians on general mathematical approaches for proofs that can be applied at scale for the large number of newly discovered formulas.\\n\\nRegarding the rate of convergence, we followed the suggestion by the referee and now present it as a formal conjecture instead of as a table. We also highlight that certain special cases of this conjecture are proven (see Ben David et al., 2024). 
Interestingly, the conjecture we pose here is more powerful than the previous results and will hopefully attract mathematicians to pursue the required proof. We agree that this way of placing the results can help organize our findings and clarify what future efforts are required. We thank the referee for this suggestion.\\n\\n* **Maybe provide examples with a few more constants, I see only pi, e and zeta.**\\n\\nWe replaced one of the results involving $e$ in table 3 with a result involving Catalan\\u2019s constant $G$, and updated the text in section 5 to more explicitly mention $\\\\ln 2$ and the Lemniscate constants that it presents.
Despite not relying on classical machine learning or neural networks, the systematic data-driven research and the use of advanced lattice-based algorithms mark our work as the most advanced large-scale effort ever executed for finding new relations between constants in number theory. As such, it is a pioneering work that can attract attention from the wider ICLR community to this new, untapped domain.\\n \\nAs a historical example, consider the simple alpha-beta pruning used in the Stockfish chess AI. Despite now being outdated, the impact of Stockfish nevertheless marked the beginning of pushing the field of chess engines forward, serving as a benchmark for future engines.\\nOne way we foresee our work being used is as such a benchmark, supporting the development of hybrid approaches to conjecture generation. The hypergraph of integer relations that we proposed and built can now be fed to a neural network as training data. The relevant data is created here systematically for the first time.\\n\\nRegarding the emphasis on the \\u201clearning\\u201d part, we would like to note that as the hypergraph of integer relations is populated, future runs of our algorithm become more efficient, as they know where computation is not needed anymore. This results in a learning process, where the computer learns about the hypergraph of integer relations and uses it when looking to further expand on it. This point has been clarified in the updated manuscript, at the end of section 2.1.\\n\\nThis is why we propose this work for ICLR. We hope that through the motivation provided in this work, the wider audience of AI researchers will help the mathematics community to develop new techniques for automated conjecture generation.\\n\\n* __The use of hypergraphs as data structure, while theoretically clean, is perhaps only practically useful if the code can make use of graph symmetries (for instance in a graph learning framework). 
Due to the lack of practical ways to distinguish sets vs sequence in hardware, I very much doubt storing relations as hypergraphs is much more practically advantageous than any naive data structure. Of course, some data structure is needed to store the relations, but the authors also claimed that this is an \\\"effective representation\\\" (line 72) and overall feature it as one of their main contributions.__\\n\\nWe thank the referee for bringing up this point. What makes the hypergraph the most effective representation is the fact that integer relations naturally admit a hypergraph structure, and so we chose to organize them in this way. i.e., the hypergraph is the natural structure that emerges, and thus it is used to store the relations (while it is not itself used in the algorithm).\\n\\nThis presentation lends itself to exploring transitivity properties of the hypergraph. Namely, given two relations, what other relations can be constructed using them? This point is clarified in the updated text, as we now define this notion of transitivity in the updated section 2.\"}" ] }
EyW92b6DyY
Retrieval Augmented Imputation using Data Lake Tables
[ "Chenyu Yang", "Yuyu Luo", "Chuanxuan Cui", "Ju Fan", "Chengliang Chai", "Nan Tang" ]
Data imputation is an essential problem in many data science applications. Existing methods often struggle to impute missing values in scenarios where there is a lack of sufficient data redundancy. In this paper, leveraging large language models (LLMs) and data lakes, we propose a novel approach for retrieval-augmented imputation called RAI, utilizing fine-grained tuple-level retrieval instead of traditional coarse-grained table-based retrieval. RAI addresses the challenges of retrieving relevant tuples for missing value imputation from a data lake, where tuples have heterogeneous attributes, diverse values, and missing values. Rather than simply searching for similar tables, RAI employs a tuple encoder to learn meaningful representations for capturing tuple similarities and differences, enabling effective identification of candidate tuples. The retrieved results are further refined by a tuple reranker. We also introduce a new benchmark, mvBench, to advance further research. Extensive experiments demonstrate that RAI significantly outperforms state-of-the-art table-based retrieval-augmented imputation methods by 10.7%.
[ "data imputation", "dense retrieval", "contrastive learning" ]
https://openreview.net/pdf?id=EyW92b6DyY
https://openreview.net/forum?id=EyW92b6DyY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ppOZFD0Icd", "lhqtVau0Rq", "Pg658HLYXO", "EE1HjHFi6m" ], "note_type": [ "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730718533530, 1733217107231, 1730595835350, 1729993715348 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6504/Reviewer_gySx" ], [ "ICLR.cc/2025/Conference/Submission6504/Authors" ], [ "ICLR.cc/2025/Conference/Submission6504/Reviewer_D1Lo" ], [ "ICLR.cc/2025/Conference/Submission6504/Reviewer_6hNh" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies data imputation using LLMs. It proposes an RAG-based solution called RAI, consisting of a tuple encoder and a tuple reranker. A benchmark named mvBench is also proposed. The experiments on the benchmark demonstrate the effectiveness and the superiority over state-of-the-art imputation methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"S1. The construction of the training dataset is interesting.\\n\\nS2. The paper features a benchmark, which can be useful for future research. \\n\\nS3. The experiments are extensive, with promising experimental results presented.\", \"weaknesses\": \"W1. The targeted problem (imputation only) is less significant compared to the tasks LLMs are often used for. LLMs (e.g., Table-GPT) can handle various table tasks. It is unclear how the proposed techniques generalize to those other than imputation.\\n\\nW2. The design of the proposed techniques is routine. Table retrieval for RAG has been explored in previous works, as stated in the submission, rendering the proposed framework less novel. The main contribution resides in its tuple encoding and the construction of the training dataset, while the reranking is rather straightforward. \\n\\nW3. While there are words like \\\"efficient\\\" and \\\"efficiently\\\" in the introduction, I did not find any efficiency evaluation in this paper. 
Lack of such evaluation might compromise the proposed method's practical use in real-world applications because data lakes are usually heterogeneous, noisy, and very large. \\n\\nW4. Only GPT models are evaluated. It is unknown how the proposed method benefits other models, especially open models, which I believe are more useful for handling large-scale business data due to privacy concerns.\", \"questions\": \"Q1. I wonder how Challenge 3 (enhancing reasoning with domain knowledge) is addressed in the paper. It seems that the LLM's reasoning ability is simply used here.\\n\\nQ2. Have you observed the usefulness of your RAG method in other tasks? I think the techniques are quite general and can be applied to other table tasks as well, but they are not explored in the submission.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"This paper studies an RAG approach. I didn't find anything that needs an ethics review.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors propose Retrieval-Augmented Imputation (RAI), a method for filling missing values in tables by retrieving relevant tuples from a data lake, particularly effective in cases with limited data redundancy. The method utilizes a tuple encoder and a tuple reranker to find tuples containing information related to the missing value. Additionally, the authors propose a method to enhance retrieval accuracy by augmenting the training dataset. Specifically, they augment the training data by modifying the caption, attribute, and value of the tuples. Lastly, they present mvBench, a benchmark for retrieval-augmented imputation, to facilitate further research. 
RAI achieves a 10.7% improvement over existing state-of-the-art methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The authors address an important issue in data science applications: imputing missing values in tables.\\n\\nS2. The authors show that their proposed method achieves a 10.7% improvement over existing state-of-the-art methods for table-based retrieval-augmented imputation.\\n\\nS3. The authors create a new benchmark for data imputation called mvBench.\", \"weaknesses\": \"W1. The rationale for the Tuple-Level Retrieval is unclear.\\nThe authors' claim that missing values can be imputed with just a few relevant tuples is not convincing, as it contradicts existing methodologies that emphasize the need for a large number of tuples to accurately identify substitute values. The authors should provide references or analysis to support their claims.\\n\\nW2. The explanation of the process for synthesizing the training dataset is insufficient.\\nThe authors do not specify how each augmentation operator is chosen and applied during the data synthesis. For example, concerning the replace operator, the authors should clarify the criteria for selecting words to replace and how synonyms are identified. \\n\\nW3. The experimental evaluation is limited.\\n- The authors should provide end-to-end data imputation accuracy using other retrievers such as Contriever, DPR-scale, BERT with MLM task, and Sudowoodo. This is necessary to demonstrate that RAI's retrieval improves data imputation performance compared to these methods.\\n- A more detailed explanation is needed to clarify why testing RATA on mvBench is difficult. The authors claim that applying RATA to mvBench is challenging due to varying definitions of \\\"relevant tables.\\\" However, this explanation lacks sufficient clarity. 
\\n- The authors should conduct an ablation study on the synthesized dataset.\\n- The authors should explore the sensitivity of different values of K on the retriever's performance to provide a more comprehensive evaluation.\", \"questions\": \"Please refer to W1, W2, and W3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a RAG-based approach that retrieves relevant rows from the data lake and feeds the retrieved rows, together with the row for which imputation is required, to an LLM to impute the missing value. The paper proposes a new benchmark dataset for training and evaluation of imputation in this context. It presents experimental results showing the proposed approaches improve retrieval of relevant rows, as well as the accuracy of the final imputation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Using data lakes for imputation is an interesting problem setting\", \"The proposed method shows improvements in retrieval\", \"A new benchmark dataset is introduced\"], \"weaknesses\": [\"Technical novelty and contributions. There are two novel ideas in the paper: (1) using data lakes to perform imputation and (2) a new proposed row-level retrieval method for data imputation. However, neither of the two is particularly new and the paper makes limited contributions overall.\", \"Regarding (1), the paper\\u2019s claims on how it\\u2019s positioned compared to related works are somewhat contradictory. It mentions that there is a lack of redundancy in their setting, but redundancy is indeed what the model uses for imputation, at least based on the provided examples. The model looks for the same information stated in other available tables. Besides, the paper needs to motivate the problem setting, i.e., in what real-world application are there other data tables stating the same information? 
Finally, I don\\u2019t see any real difference in the problem setting compared with question answering over tables. That is, why can\\u2019t I just turn the row with the missing value into a natural language question (which is in fact what the method does) and use existing table QA methods to find the answer? The paper should better clarify what exactly is new in their problem formulation.\", \"Regarding (2), the training process uses very similar ideas to Sudowoodo. Specifically, the main contribution of the paper is the synthetic data generation, but the proposed ideas are very similar to the data augmentation procedure in Sudowoodo. The paper should clarify the technical differences between the methods.\", \"Experiments. Experiments don\\u2019t provide a thorough evaluation of the method. This includes missing baselines and ablation studies as well as insufficient description of the experimental procedure. As such, the experiments don\\u2019t show if the method is truly beneficial, and if so, what novel contribution led to the benefits.\", \"Why not include Sudowoodo in Table 4? Given the large discrepancy between the retrieval accuracy of BM25 (table 5) and its final imputation accuracy (table 4), presenting Table 4 with Sudowoodo can provide a better understanding of the method's contributions\", \"Using pre-trained embedding models (e.g., openai\\u2019s embedding model) is a common retrieval method. The paper should compare against embedding rows using pre-trained embedding models.\", \"Do Sudowoodo and RAI have the same base bert model? Are they both using an existing pre-trained model? Having the same starting point should help better understand differences between the two. The paper should also report the accuracy for the (pre-trained) BERT model before fine-tuning with their method\", \"The paper needs to provide a better and thorough description of the train/test splits used. Is the retriever training only done on the WT dataset? 
How different is the WT dataset from other datasets? Given that WT contains Wikipedia tables, can questions in CM and CP be answered based on the information in WT? It is important to understand if there is in fact any domain difference between the datasets\", \"I don\\u2019t understand why the datasets need to be manually labeled, nor why a human is expected to be able to provide correct labels (given that not all the information may be known by a person). Why not take a complete table, drop some values from some of the cells, and use the dropped values as ground-truth answers?\", \"The paper should use an entity linking method for retrieval. The paper states that such a method is used during training to find positive samples. Why not do that at test time? Given that the workload seems to be mainly finding information about an existing entity, perhaps entity linking should be used for retrieval (i.e., link entities, and at query time retrieve entities that are linked)\"], \"questions\": [\"Please provide answers to the questions raised above specifically:\", \"Clarify novelty in problem setting and solution\", \"Perform the requested experiments or discuss why they cannot/should not be done\", \"Provide the requested details about the experimental setting\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EyTzNHoEyK
DrivAerML: High-Fidelity Computational Fluid Dynamics Dataset for Road-Car External Aerodynamics
[ "Neil Ashton", "Charles Mockett", "Marian Fuchs", "Louis Fliessbach", "Hendrik Hetmann", "Thilo Knacke", "Norbert Schönwald", "Vangelis Skaperdas", "Grigoris Fotiadis", "Astrid Walle", "Burkhard Hupertz", "Danielle C. Maddix", "Peter Yu" ]
Machine Learning (ML) has the potential to revolutionise the field of automotive aerodynamics, enabling split-second flow predictions early in the design process. However, the lack of open-source training data for realistic road cars, using high-fidelity CFD methods, represents a barrier to their development. To address this, a high-fidelity open-source (CC-BY-SA) public dataset for automotive aerodynamics has been generated, based on 500 parametrically morphed variants of the widely-used DrivAer notchback generic vehicle. Mesh generation and scale-resolving CFD was executed using consistent and validated automatic workflows representative of the industrial state-of-the-art. Geometries and rich aerodynamic data are published in open-source formats. To our knowledge, this is the first large, public-domain dataset for complex automotive configurations generated using high-fidelity CFD.
[ "CFD", "automotive", "ML", "drivaer", "dataset" ]
Reject
https://openreview.net/pdf?id=EyTzNHoEyK
https://openreview.net/forum?id=EyTzNHoEyK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sOEYUmnsjV", "s9kxZVkwny", "pn4JvhWLa4", "cA8bze6BXa", "WSodYuRyI4", "OqShas098q", "1SXrJDFgk6" ], "note_type": [ "official_review", "official_comment", "official_review", "official_review", "official_review", "meta_review", "decision" ], "note_created": [ 1730353820607, 1732873134384, 1730525988425, 1730908616832, 1730450182173, 1734670206840, 1737523590733 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3700/Reviewer_EGAc" ], [ "ICLR.cc/2025/Conference/Submission3700/Authors" ], [ "ICLR.cc/2025/Conference/Submission3700/Reviewer_Wopn" ], [ "ICLR.cc/2025/Conference/Submission3700/Reviewer_hHNL" ], [ "ICLR.cc/2025/Conference/Submission3700/Reviewer_MyEC" ], [ "ICLR.cc/2025/Conference/Submission3700/Area_Chair_uJQp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This is a solid and valuable contribution. The authors present a large-scale public dataset in the automotive domain and provide detailed information on the dataset generation process. However, the paper reads more like a technical report than an academic paper, and there appears to be a lack of innovative contributions beyond the dataset itself. While the dataset generation is well-documented, adding more baseline testing would strengthen the paper by demonstrating dataset quality and usability, as done in related datasets like PDEBench [1], EAGLE [2], and DrivAerNet++ [3].\\n\\n[1] PDEBench: An Extensive Benchmark for Scientific Machine Learning\\n\\n[2] EAGLE: Large-Scale Learning of Turbulent Fluid Dynamics with Mesh Transformers\\n\\n[3] DrivAerNet++: A Large-Scale Multimodal Car Dataset with Computational Fluid Dynamics Simulations and Deep Learning Benchmarks\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors produced a high-quality looking high-fidelity public dataset from the automotive domain with a solid amount of work.\", \"weaknesses\": \"1. 
The paper is written in a style that is more characteristic of a technical report than an academic paper.\n\n2. Lack of comprehensive baseline testing to showcase dataset utility and model performance.", \"questions\": \"1. As far as I know, there are already some open-source datasets, for example, DrivAerNet++ [1]; what is the difference between your work and theirs?\n\n[1] DrivAerNet++: A Large-Scale Multimodal Car Dataset with Computational Fluid Dynamics Simulations and Deep Learning Benchmarks\n\n2. As a benchmark, how do you evaluate the performance of the model? As a benchmark dataset, it\u2019s important to detail the evaluation metrics and testing procedures used to assess model performance. Consider specifying which metrics were used, how the dataset was split for training/validation/testing, and any comparisons made with baseline methods. This would help highlight how the dataset enables meaningful performance comparisons across models.\n\n3. What defines a \\"good\\" public dataset? It would be useful to discuss qualities beyond size alone. For instance, diversity in scenes, real-world relevance, and usability in various model architectures are important factors. Additionally, describing any distribution plans and dataset accessibility options would be valuable in demonstrating the impact and reach of the dataset. This could also increase community engagement and foster broader adoption.\n\n4. What are the dataset requirements for NNs? Since this dataset involves a large number of inputs, guiding model requirements would be useful. Consider outlining preferred model architectures or input-output formats that would be suitable for this data, and discuss any preprocessing or dimensionality reduction techniques recommended for handling the high input volume. 
This would make it easier for researchers to integrate the dataset into existing models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for the review and apologies for the late reply.\\n\\nWeaknesses\\nFor points 1 & 2 we apologize and will fix it in the final paper version. For point 3, we accept that we have assumed that readers of this paper will already be interested in training ML models for CFD and will know about the industrial and academia tasks but I accept we should not assume this and will be sure to add a few more clarifying points on exactly how it could be done. \\n\\nPoint 4 is a crucial one. We purposely have created this paper as very much the \\u2018data\\u2019 part of it, with only minimal ML evaluation to illustrate how it could be used. We believe this still has merit given how detailed the paper is on the dataset generation and validation. We are working on a separate paper with other co-authors to more formally present a detailed ML evaluation. \\n\\nQuestions\", \"1\": \"Could you please clarify exactly what you mean by baseline geometries can be further used in other experiments? For the size of the dataset, we have shown in the ML evaluation that using the full 500 cases gives promising accuracy and thus we believe this is a suitable size.\\n2. That is a good point and we will add a section comparing the size/fidelity of the dataset versus others\\n3. 
the test-case is the DrivAer road-car geometry that is extremely common in the automotive aerodynamics community (for example it is used within the AutoCFD workshop series) and the specification of this test-case is the same in this dataset as that workshop series.\"}", "{\"summary\": \"This paper has successfully created a high-fidelity, open-source CFD dataset based on a parametric variation of the DrivAer vehicle model, addressing a critical gap in available training data for ML applications in this domain. The paper thoroughly details the dataset generation process, CFD methodologies, and the potential of the dataset for ML model training and evaluation. The work is well-structured, the results are promising, and the dataset's release under a permissive license is commendable, fostering further research and development. The paper's limitations are acknowledged, and suggestions for future work are provided, indicating a clear path for ongoing research in this area. Overall, the paper is a valuable addition to the literature and would benefit the automotive aerodynamics community by enabling more accurate and efficient design optimization studies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper introduces the DrivAerML dataset, which stands out for its originality in several aspects. Firstly, it provides one of the first large-scale, high-fidelity CFD datasets for complex automotive aerodynamics geometries, addressing a significant gap in the availability of open-source training data for ML models in this field. The use of 500 parametrically morphed variants of the DrivAer notchback generic vehicle represents a creative combination of existing ideas, expanding the dataset's applicability beyond a single geometry. 
This approach not only enhances the diversity of the dataset but also simulates real-world automotive design variations, which is a novel contribution to the field.\\nThe quality of the dataset itself is exceptional, as it is generated using consistent and validated automatic workflows that are representative of industrial state-of-the-art practices. The use of hybrid RANS-LES methods for CFD simulations ensures that the data is of the highest fidelity, which is crucial for the development and testing of accurate ML models. The dataset's comprehensive nature, including full flow-field data, surface data, and application-relevant quantities, further enhances its quality and utility for researchers and practitioners.\\nThe paper is well-structured and clearly articulated. The authors effectively communicate the motivation behind the dataset, its construction, and its potential applications. The clarity of the paper is further enhanced by the detailed descriptions of the CFD methods, the workflow for dataset generation, and the validation against experimental data. The inclusion of visual aids, such as figures and tables, aids in understanding the complexity and diversity of the dataset. Additionally, the paper clearly outlines the structure and contents of the dataset, making it accessible for potential users.\\nThe significance of this paper lies in its potential to revolutionize automotive aerodynamics by enabling faster and more cost-effective fluid flow predictions during the design process. By providing a high-fidelity dataset, the authors empower the research community to develop and test ML models that can significantly accelerate design optimization studies. The dataset's open-source nature and permissive licensing (CC-BY-SA) ensure widespread accessibility and encourage collaborative innovation across academia and industry. 
The paper's contribution to removing limitations from prior results, such as the lack of high-quality, public-domain CFD data, is significant and has the potential to inspire new research directions and applications in automotive aerodynamics and beyond.\", \"weaknesses\": \"While the paper provides a comparison of the CFD methodology against experimental data for the baseline geometry, it could benefit from a more extensive validation across a broader range of geometries within the dataset. Actionable Insight: The authors could consider validating the CFD results against experimental data for a subset of the 500 parametrically varied geometries to ensure the dataset's accuracy across different configurations.\\nThe paper mentions the use of statistical quality control in the automated workflows but does not delve into the specifics of data preprocessing steps. Actionable Insight: Providing a detailed account of data preprocessing, including any normalization or filtering applied to the CFD outputs, would enhance the transparency and reproducibility of the dataset.\\nThe dataset focuses on force and moment coefficients, which are crucial, but other aerodynamics metrics such as drag polars or pressure distribution details could provide a more comprehensive understanding of the flow field. Actionable Insight: Expanding the dataset to include a wider array of aerodynamics metrics could increase its utility for researchers interested in specific aspects of vehicle aerodynamics.\\nThe paper conducts a preliminary ML evaluation using a GNN approach but does not explore the performance of other ML models or deeper analysis of model limitations on the dataset. 
Actionable Insight: The authors could experiment with a variety of ML models and hyperparameter tuning to provide a more thorough evaluation of the dataset's predictive capabilities and to identify any patterns or biases in the data that could affect ML model performance.\\nGiven the rapid advancements in CFD and ML, the dataset might become outdated. Actionable Insight: The authors should consider establishing a protocol for regular updates to the dataset, incorporating new geometries, boundary conditions, and possibly even results from more advanced CFD simulations as they become feasible.\\nThe paper does not discuss plans for community engagement or feedback mechanisms to improve the dataset post-release. Actionable Insight: Establishing a forum or platform for users to provide feedback, suggest improvements, or contribute additional data could foster a collaborative environment around the dataset and enhance its long-term value.\\nAlthough the dataset is synthetic, it is derived from real-world automotive design principles. Actionable Insight: The authors might consider an ethical review process to address any potential concerns, even if they are perceived as minor, to set a precedent for responsible data handling in the field.\", \"questions\": \"The paper mentions the use of statistical quality control but does not detail the data preprocessing steps. Could you provide more transparency on the preprocessing pipeline applied to the CFD outputs?\\nThe dataset primarily includes force and moment coefficients. Are there plans to include additional metrics such as drag polars or detailed pressure distributions in future updates?\\nThe ML evaluation section focuses on a GNN approach. 
Have you considered the performance of other ML models on this dataset, and what were the outcomes?\\nGiven the rapid evolution of CFD and ML, how do you plan to keep the dataset relevant and up-to-date?\\nAre there any mechanisms in place for the community to provide feedback or contribute to the dataset post-release?\\nThe paper acknowledges certain limitations. Could you provide a roadmap on how you intend to address these limitations in future work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper generates a high-fidelity open-source public dataset, named DrivAerML, for automotive aerodynamics. This dataset consists of 500 variants of the baseline geometry, covering the main features seen on this category of road vehicle. Hybrid RANS-LES is used which is the highest-fidelity scale-resolving CFD approach routinely deployed by the automotive industry, to ensure best possible correlation to experimental data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"It is stated that DrivAerML is the first large, open-source ML training dataset comprising high-fidelity CFD data for complex automotive aerodynamics geometries. The dataset primarily targets data-driven surrogate ML approaches. The dataset may also useful for other ML approaches, or even for purposes beyond the ML field. The generation of high-fidelity CFD dataset and the contents and structure of the dataset are described. A supplement material is detailed at the end.\", \"weaknesses\": \"1.\\tThe section numbers in the paper organization paragraph are missing.\\n2.\\tThe paper format should be double-checked. The reference format is wrong.\\n3.\\tThe generation process of the dataset is detailed. However, how does this dataset can be used to facilitate the industrial tasks and academia tasks could be further elaborated. 
For example, what\u2019s the purpose of Section E.1? The description of the task is not clear. What are the benefits of this ML evaluation process? What conclusion can we draw from Section E.1?\n4.\tThe dataset was created for the development and testing of machine learning methods for Computational Fluid Dynamics and automotive aerodynamics. The baseline ML approaches are not thoroughly tested and listed in the paper. It is recommended to further enrich the baseline models and their corresponding testing performance. These benchmarks and performance info could be hosted on a separate website.", \"questions\": \"1.\tBaseline geometries are provided at the beginning of the paper, but how can these baseline geometries be further used in other experiments? Is the size of the dataset enough for training ML models?\n2.\tCould the authors provide a more detailed comparison with similar datasets?\n3.\tIt is emphasized in the paper that the dataset could be a challenging test-case at future conferences/workshops to benchmark the performance of different ML approaches for an open-source automotive dataset. This dataset is not very common. What\u2019s the so-called test-case for this dataset? How should we define the test-cases? It is not very clear how this dataset is beneficial for academia.", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an open-source dataset of high-fidelity computational fluid dynamics (CFD) simulations focused on the aerodynamics of one of the state-of-the-art generic vehicle models, DrivAer, with variations in model dimensions. Through the incorporation of morphing boxes, this dataset encompasses a wide range of vehicle structures. 
As a result, it presents a valuable asset for developing data-driven approaches for machine learning applications.", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The lack of accepted challenging benchmark datasets is a current hurdle for the comparison and thus broader usage of SciML. The proposed data set could address this challenge since it covers a multitude of different vehicle structures due to a large number of parameters of the morphing boxes. Moreover, a comparison to experimental data from AutoCFD to validate the accuracy of the simulation is included.\", \"weaknesses\": [\"In terms of language, the paper is generally well written; however, there are some imprecise wordings. This includes the choice of geometry that is partly based on \\"expectations of industrial feasibility\\" and parameter ranges \\"based on engineering judgement\\". More information or a brief discussion should be added, e.g. to add more concrete criteria for what constitutes \\"industrial feasibility\\" or to explain the specific engineering principles that informed their judgment on parameter ranges.\", \"The missing reference numbers to the corresponding sections in e.g. chapter \\"Objectives and Main Contributions\\" and \\"CFD Methods\\", as well as the reference to the general appendix (named \\"SI\\") instead of explicit chapters throughout the paper, impede the understanding of the presented information.\", \"The low-fidelity dataset \\"DrivAerNet\\" is mentioned. Since the datasets aim at similar usage, a more thorough discussion should be provided. In the best case, a validation that the new dataset can improve data-based models. Moreover, a discussion on further related data sets would help to point out the specific benefits of the proposed data set.\", \"The paper is not self-contained. The appendix contains information that is in fact necessary to understand the data set and possible usage scenarios. 
This includes the validation of the method that references the underlying experiment cases and parts of the comparison. This is also the case for the sampling of the parameter space (in the next bullet point). This information is required to understand the data set and potential usage scenarios and should thus be included in the paper.\", \"Even with information from the appendix, crucial information for a data set remains unclear. E.g., what is the number of sampled points for each parameter, and thus how densely are the parameters sampled? The appendix shows a distribution for two parameters with maybe something between 100 and 200 data points (exact number not given). Thus there are still 14 dimensions unsampled, but only 400-500 points left. This seems like a very sparse data set. MeshGraphNets, which promise generalizability w.r.t. geometry, are mentioned. However, sampling seems (from the above estimation) too sparse to achieve this generalization. This should be discussed. Also, further potential applications should be discussed w.r.t. the size of the data set. 
Accordingly, the \\\"ML Evaluation\\\" section should be detailed, e.g by\", \"a complete breakdown of the number of samples for each parameter.\", \"a discussion on how the sparsity of the dataset might affect its utility for different machine learning tasks, particularly in relation to geometric generalization\", \"more details on the ML evaluation, including the specific architecture used, loss function formulation, and a more critical analysis of the results in light of the dataset's characteristics.\", \"the authors thoughts on potential future work to address the dataset's limitations, such as conducting a sensitivity analysis to inform more targeted sampling.\"], \"questions\": \"How does the data set relate to further available data sets in the field of SciML?\\n\\nWill the data set size be increased in the future?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work proposes a dataset for automotive aerodynamics. It aims to support the development of ML-based computational fluid dynamics methods to accelerate flow prediction during the vehicle design process. The main benefit of the work is that it seems to be the first dataset of its type. The reviewers also raised several weaknesses, including the paper's writing, insufficient validation, and lack of additional metrics. These concerns remained largely not addressed during the review process, and thus, this work is not ready for publication.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not participate in the rebuttal except for responding to one review.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
Ey8KcabBpB
EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents
[ "Junting Chen", "Checheng Yu", "Xunzhe Zhou", "Tianqi Xu", "Yao Mu", "Mengkang Hu", "Wenqi Shao", "Yikai Wang", "Guohao Li", "Lin Shao" ]
[ "Heterogeneous multi-robot systems (HMRS) have emerged as a powerful approach for tackling complex tasks that single robots cannot manage alone. Current large-language-model-based multi-agent systems (LLM-based MAS) have shown success in areas like software development and operating systems, but applying these systems to robot control presents unique challenges. In particular, the capabilities of each agent in a multi-robot system are inherently tied to the physical composition of the robots, rather than predefined roles. To address this issue, we introduce a novel multi-agent framework designed to enable effective collaboration among heterogeneous robots with varying embodiments and capabilities, along with a new benchmark named Habitat-MAS. One of our key designs is Robot Resume: Instead of adopting human-designed role play, we propose a self-prompted approach, where agents comprehend robot URDF files and call robot kinematics tools to generate descriptions of their physics capabilities to guide their behavior in task planning and action execution. The Habitat-MAS benchmark is designed to assess how a multi-agent framework handles tasks that require embodiment-aware reasoning, which includes 1) manipulation, 2) perception, 3) navigation, and 4) comprehensive multi-floor object rearrangement. The experimental results indicate that the robot’s resume and the hierarchical design of our multi-agent system are essential for the effective operation of the heterogeneous multi-robot system within this intricate problem context.
[ "Embodied Artificial Intelligence", "LLM Multi-agent System", "Multi-robot System", "Task Planning" ]
Accept (Poster)
https://openreview.net/pdf?id=Ey8KcabBpB
https://openreview.net/forum?id=Ey8KcabBpB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u2I7Inukpj", "tfArh98q1Y", "sMIJ637vgM", "rNyfJbqsHG", "rKQxgSf3Sb", "qb5XvNXBeX", "pBNdRaRA1M", "oy6RVSTY3S", "jkY1lR1ktt", "gvgPxNGerP", "eXWV49vXQu", "eOtjjoSki0", "cItGQjpDRT", "b0M7Wzoi0E", "aCcYsOw1Ai", "VwVw0VToRa", "SL3vClG9ku", "Rx229jaMjI", "RNKPsNxBqv", "OqCRw3Jh8B", "O2PrdBwRbG", "Msi2UNiFnf", "MsGrELZouv", "JRIwG2IpvS", "J2yOYgzbkb", "HtStNp0kNP", "FM0C3leDw5", "EvxHvfVKdX", "AsVTZ7uTnk", "8dGEnDPZ1B", "8b3OBa926b", "5pEp5iLm9m", "5iRRd4SBhg", "2kAXpyReb4", "1CojMDZ1Vs", "0czwL9lxBV" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730350244067, 1732633791098, 1732117720708, 1732367508307, 1732201683099, 1732541962473, 1732115939730, 1732699139006, 1732542383904, 1732209340165, 1730572076622, 1732541959916, 1732599780497, 1732369335487, 1732371528295, 1732211344634, 1732370844248, 1732204484618, 1730516827325, 1732542084757, 1732367625219, 1732512516461, 1732491384569, 1732259697928, 1732543172691, 1729382107938, 1732105795444, 1732370464823, 1734432307524, 1737523595113, 1732209805600, 1732508818413, 1733029713479, 1732259998569, 1732209137240, 1732205687658 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_jjgY" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_79K2" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_5WcA" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_79K2" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_79K2" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_5WcA" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_79K2" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_xZUU" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Area_Chair_v3Tx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_jjgY" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Reviewer_79K2" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ], [ "ICLR.cc/2025/Conference/Submission3753/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents an interesting approach to LLM-based multi-robot systems where robot roles are not predefined but are 
instead determined dynamically based on the robots' physical capabilities. The authors introduce the concept of a \\\"robot resume,\\\" generated from robot URDF files and kinematic data, to guide task planning and integrate role flexibility within the model.\\n\\nThis concept is very compelling; however, my main concern is the additional degree(s) of freedom introduced into the decision-making and planning. In systems with predefined/assumed heterogeneous roles, the answer to \\u201cwho does what\\u201d is partially predetermined, simplifying the planning process. Here, this isn\\u2019t the case, and the current centralized hierarchical LLM-based planning and task assignment system becomes significantly constrained. With the current design, the centralized planner and task assignment modules will face considerable burden in optimizing plans and assignments, particularly as domain complexity increases or when dealing with similar or more universally capable robots. I suggest comparing your approach against systems with predefined roles to illustrate any potential advantages or limitations. Additionally, you can provide a quantitative analysis of how your system's performance scales with increasing numbers of robots, robot classes, or task complexity (see [1] for example).\\n\\n[1] Seraj et al. \\\"Learning efficient diverse communication for cooperative heterogeneous teaming\\\", AAMAS 2022\\n\\nOne potential solution to mitigate this issue could involve incorporating prior knowledge to streamline planning and decision-making. For example, a standard quadcopter lacks object manipulation capabilities, so it can be automatically excluded from relevant tasks, reducing the computational load. Such prior knowledge could be represented as a context/class vector for a contextual classification. 
Additionally, other factors such as robot availability (i.e., whether a robot is still executing a prior task) could contribute to a truly asynchronous multi-robot system.\", \"i_also_have_a_few_questions\": \"- Can you discuss the modularity of this approach? For instance, can you describe the process for adding a new capability like battery life to the robot resume and how it would be integrated into the task planning and allocation process. This would give a clearer picture of the system's modularity and extensibility.\\n- Do LLM-based planners consider assignment risks, and if so, how can these risks be leveraged in the decision-making process?\\n- It remains unclear how the central planner optimizes \\\"which robot does what.\\\" For example, how does the system handle scenarios with (1) multiple equally capable robots, (2) differently capable robots that can all execute a task, or (3) universally capable robots? Does the LLM-based \\u201cdiscussion\\u201d incorporate strategic negotiation or cognitive hierarchy mechanisms (e.g., k-level thinking) to optimize plans and assignments? Can you provide a specific example scenario for each of these cases and explain how the system would handle the allocation in each case?\\n- Given the dynamic nature of the environment, assuming full observability is unrealistic. However, it is also unclear how partial observability is handled in this system. Can you discuss how your system might be adapted to handle partial observability, and what challenges you anticipate in doing so?\\n\\nAdditionally, I suggest including a section on Heterogeneous Multi-Robot Systems literature rather than the current Multi-Robot Systems section. 
Given the context of this work, it would be more impactful to emphasize the heterogeneity in collaborative multi-robot teams, categorizing and introducing previous approaches such as MARL (heterogeneous multi-robot communication and coordination), Multi-Agent Apprenticeship Learning (learning heterogeneous multi-robot policies from human demonstrations), trait-based heterogeneous multi-robot planning, and other non-LLM-based strategies. Below is a list of papers to begin with:\n\n[1] \"Learning efficient diverse communication for cooperative heterogeneous teaming\", AAMAS 2022\n\n[2] \"Heterogeneous Multi-Robot Reinforcement Learning\", AAMAS 2022\n\n[3] \"STRATA: A Unified Framework for Task Assignments in Large Teams of Heterogeneous Agents\", JAAMAS 2020\n\n[4] \"Mixed-initiative multiagent apprenticeship learning for human training of robot teams\", NeurIPS 2023\n\n[5] \"Heterogeneous policy networks for composite robot team communication and coordination\", T-RO 2024\n\nLastly, a minor but important point: the paper contains numerous grammatical and verbal errors. A thorough review is recommended. \n\nOverall, I'm happy with the contributions made in this paper and vote for a weak accept. I'd be happy to raise my score given that mine and my fellow reviewers' comments are adequately addressed. Thanks!", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-- See above.\", \"weaknesses\": \"-- See above\", \"questions\": \"-- See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the author's effort. The additional experiments solve my confusion. I do not have any problem.\"}", "{\"title\": \"Official Comment by Authors (part 1)\", \"comment\": \"We sincerely appreciate the reviewer's thorough and insightful feedback on our paper. 
The reviewer has provided valuable comments on the strengths of our approach, particularly highlighting the innovative concept of \"robot resume\" for dynamic role determination in LLM-based multi-robot systems. We are grateful for their recognition of the compelling nature of our work. Since there are a lot of questions and discussions in the reviewer\u2019s feedback, and the number of characters in a comment box is limited, we will respond in multiple parts:\n\n### **Concern about degree of freedom for the system**\n\nWe appreciate the reviewer's insightful question about the additional degrees of freedom introduced in our decision-making and planning process. Specifically, we agree with the reviewer\u2019s point that \u201c\u2018who does what\u2019 is partially predetermined, simplifying the planning\u201d. In fact, we follow the conclusion in [1], which explored four different LLM multi-agent architectures for group discussion. Their conclusion is that the centralized decision-making process greatly helps all agents to reach agreement, and the discussion \u201cconverges\u201d to a concrete plan, i.e. a sequence of sub-tasks and a task assignment. For a constraint-free LLM multi-agent discussion, the system has a high probability of divergence, in which no agreement can be made or the discussion falls into a loop or mode collapse. We do acknowledge the limitation of this constrained task planning here, trading a bit of freedom for convergence and performance. We will add an \u201cAcknowledgment for Limitation\u201d section to discuss the potential limitations of the design for extensive systems. \n\n### **Suggestion on comparing our approach with other methods with predefined roles**\n\nWe are really sorry that the writeup has led to some confusion. In the ablation study itself, we have already compared our framework with other similar baseline methods. For example, w/o. 
Robot resume refers to Meta GPT[2] and w/o. Discussion refers to CMAS[1]. It is worth mentioning that Meta GPT[2] is a role-playing method that assigns different roles (managers, developers, test engineers) manually to LLM agents, as we discussed in Section 4.4 of the submission, lines [475 - 478]. Our method\\u2019s success rate, sub-goal success rate, and time efficiency consistently outperform the role-playing method with human-created profiles. We really thank the reviewers for pointing out this ambiguity, and we will revise the draft as soon as possible. \\n\\n### **Question of modularity and experiment on adding battery life**\\n\\nThank you for your thoughtful question about the modularity and extensibility of our system. Our implementation leverages modularity through a `RobotResume` Python class, which stores robot capabilities in a JSON file. Different capabilities are inferred via separate modules, such as manipulation, perception, and navigation, each utilizing specific information like URDF files and camera parameters. To add new capabilities, such as battery life, we update the `defaults.py` file with the relevant descriptions and regenerate the robot resumes in JSON format. The new capability can be integrated into task planning by refining the agents' prompts to focus on the added feature, demonstrating the modularity and extensibility of our system.\\n\\nAt the reviewer's request, and to better illustrate the extensibility of the robot resume, we conducted experiments by adding battery life, designed based on real-world conditions, as a new capability. We tested the system on 5 episodes of perception tasks and 5 episodes of manipulation tasks. The results showed a 100% success rate in avoiding the allocation of robots whose task execution time would exceed their battery life, demonstrating the effective integration of the new capability into the task planning and allocation process. 
The result is expected thanks to the strong prior of common-sense reasoning stored in LLMs. The chat history of the experiment is uploaded to our [repository](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/rebuttal/exp_battery_life.zip)\\n\\nConsidering the limited time for the rebuttal and the limited comment length, this response addresses only part of the concerns and questions. We will upload the updated draft and the remaining parts of the rebuttal comments as soon as possible. We really appreciate the reviewer's understanding. \\n\\n[1] Chen, Yongchao, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. \\\"Scalable multi-robot collaboration with large language models: Centralized or decentralized systems?.\\\" In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 4311-4317. IEEE, 2024.\\n\\n[2] Sirui Hong et al., \\\"MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework,\\\" in The Twelfth International Conference on Learning Representations, 2024.\"}
However, it is not clear to us how effective this naive transplantation could be.\", \"Layered communication graph: A more complex communication graph also means higher computational complexity and a greater possibility of communication failure. The graph used in our current framework is a fully connected graph with a star topology. It will require some engineering effort to transplant. Besides, sophisticated communication graph design and an increase of agents in the network could lead to problems similar to those people face in Computer Networks, which is beyond our knowledge boundary.\"]}", "{\"comment\": \"We really thank the reviewer for the time and effort invested in this thoughtful evaluation of our paper. We also appreciate the recognition of our work's strengths, particularly the acknowledgment of the innovative use of robot resumes generated from URDF files to enhance inter-robot communication and the effective leveraging of LLMs for heterogeneous multi-robot collaboration challenges. Apart from the positive comments, we would like to address the concerns and questions raised by the reviewer:\\n\\n### **Comparison with similar frameworks and role-playing methods**\\n\\nWe are really sorry that the writeup has led to some confusion. In the ablation study itself, we have already compared our framework with other similar baseline methods. For example, **w/o. Robot resume** refers to **Meta GPT**[1] and **w/o. Discussion** refers to **CMAS**[2]. It is worth mentioning that Meta GPT[1] is a role-playing method that assigns different roles (managers, developers, test engineers) manually to LLM agents, as we discussed in Section 4.4 of the submission, lines [475 - 478]. Our method\\u2019s success rate, sub-goal success rate, and time efficiency consistently outperform the role-playing method with human-created profiles. 
We really thank the reviewers for pointing out this ambiguity, and we will revise the draft as soon as possible.\\n\\n### **Performance on various scenarios and different robot configurations**\\n\\nWe greatly appreciate this question, though we feel that its scope could be somewhat broad.\\n\\nFirst, we highlight that our Habitat-MAS benchmark is one of the most diverse and comprehensive datasets, involving 1) large-scale multi-floor, multi-room indoor scenarios from Matterport3D [3] and HSSD [4]; and 2) diverse types of robots, including drones, wheeled robots, legged robots, and different types of arms, including revolute arms and prismatic arms, as we introduced in paper Section 4.1, lines [372-379]. We believe our problem settings are representative and can cover a large share of indoor multi-robot scenarios and robot configurations that are **on the commercial market.**\\n\\nYet we agree that this setting still does not cover general scenarios or general multi-robot configurations. However, due to the limited time of the rebuttal session and the massive engineering effort required to design new environments, tasks, and robot teams, we cannot provide experimental results to answer this question. We also believe that this problem can be better addressed by large corporations and institutes with their resources and labor.\\n\\n### **Question about extension to game relationship**\\n\\nThank you for your insightful question regarding the potential extension of our framework to scenarios involving game relationships. While our current implementation focuses on a fully interconnected, collaborative robot system, we recognize the importance of addressing more complex scenarios.\\n\\nTo extend our framework to game-theoretic situations, we propose two potential approaches:\\n - Introducing a group concept: We could incorporate the notion of groups or teams within the multi-robot system. 
This would allow us to model game relationships between different interest groups, similar to scenarios found in competitive robotics applications like RoboMaster or robot soccer.\\n- Implementing a more complex communication graph: Instead of a fully connected graph, we could design a layered communication topology. This approach would better represent scenarios where information flow is restricted or strategic, as often seen in game-theoretic contexts.\\n\\nHowever, we believe that the fundamental principles of our framework, particularly the embodiment-aware reasoning, could be adapted to handle these more complex interaction dynamics. \\n\\n[1] Sirui Hong et al., \\\"MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework,\\\" in The Twelfth International Conference on Learning Representations, 2024.\\n\\n[2] Chen, Yongchao, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. \\\"Scalable multi-robot collaboration with large language models: Centralized or decentralized systems?.\\\" In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 4311-4317. IEEE, 2024.\\n\\n[3] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva,\\nShuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor\\nenvironments. International Conference on 3D Vision (3DV), 2017\\n\\n[4] Khanna, Mukul, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X. Chang, and Manolis Savva. \\\"Habitat synthetic scenes dataset (hssd-200): An analysis of 3d scene scale and realism tradeoffs for objectgoal navigation.\\\" In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2024.\"}
And we truly appreciate the time and effort the reviewer has dedicated to thoroughly reviewing our work and engaging in in-depth discussions with us during the rebuttal.\"}", "{\"title\": \"Rebuttal Comment\", \"comment\": \"We really appreciate the reviewer's effort in reviewing and advising. And we do apologize for not clarifying the paper\\u2019s focus enough.\\n\\n### **Regarding Detailed Description**\\n\\nAlthough we have mentioned a big topic in this paper, as we described in the introduction (Section 1, line 100), our focus is _embodiment-aware task planning_ using the LLM multi-agent methodology, which is related to high-level task planning. The detailed high-level decision-making process in our framework is illustrated in Section 3.4, line 302. \\n\\nWe do not prominently emphasize the contribution of the entire system in low-level motion planning since, as we have mentioned in Section 4.1, line 390, collision has been disabled in the Pybullet physics simulation in this benchmark. \\nBesides, we acknowledge that the detailed description is not discussed enough. Thus we add section _B.4 Detailed Implementation of Robot Low-level Control in Benchmark_ for clarity.\\n\\n### **Regarding Communication Graph**\\nWe really thank you for your suggestion. We describe the communication graph and discussion process in Section 3.4, but without visualization.\\nWe have therefore added the communication graph in the appendix _A.2 MAS Communication Graph_ for graph visualization and a detailed description to address this concern. \\n\\n### **Regarding Error in Video**\\nWe are really sorry for the confusion caused by the video. For the drone at the 16th second in the video, the drone in fact passed through the room door following its route, as the screenshot shows. 
Please refer to this picture [CLICK](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/rebuttal/drone_pass_door_bbox.png), in which the door the drone flies through can be clearly seen. We\\u2019ve rechecked the demo videos on the website and confirmed that all the robots in the videos behave correctly.\\n\\nWe sincerely express our appreciation for the reviewer's valuable advice, and we'll update the draft according to those suggestions and questions as soon as possible. We do hope the refinement and explanation can win our reviewer back, and we look forward to further discussion.\"}", "{\"title\": \"Update of Revision V2\", \"comment\": \"Dear reviewers,\\n\\nWe greatly appreciate all the constructive comments from the reviewers. We have revised our paper according to the suggestions. Based on revision V1, we have these extra updates:\\n- We added `C.6 EXTRA EXPERIMENTS ON SCALABILITY OF EMOS`. Thanks to insightful questions by `Reviewer 79K2` and `Reviewer jjgY` about the scalability of the EMOS framework, we had helpful discussions about this issue. In this revision, we summarized the extra experiments and discussions about scalability in a new appendix section. \\n\\n\\nWe have highlighted all the revised/added parts in blue for the reviewers' convenience. We will remove the colorization in the formal version. We are also looking forward to reviewers' feedback on this revision.\"}", "{\"title\": \"Update on Scalability Experiments' Results\", \"comment\": \"We really thank the reviewer for the insightful question on the scalability of the current framework. 
As promised before, we are now updating the results of the scalability experiments:\\n\\n### **Table 1: Scalability with Increasing Agent Numbers**\\n\\n| Number of Robots | Success Rate (%) | Token Usage |\\n|-------------------|------------------|-------------|\\n| 2 | 80% | 48779 |\\n| 4 | 60% | 73202 |\\n| 6 | 70% | 93252 |\\n| 10 | 50% | 151952 |\\n\\nIn the first experiments of scaling up the robot number, we found that as the number scales up, the multi-agent system will face problems like hallucinations (in the setting of 10 agents) and the average success rate will decline. This is as expected since the hallucination problem in LLM is prevalent and it becomes worse with the increase of context length. This could be alleviated with more powerful LLM models as we have witnessed amazing progress in LLM model capability in the past year. Or designs like hierarchical communication with smaller sub-group discussions and larger group aggregation (delegate meeting) could help to solve the scalability problem in multi-agent discussion.\\n\\n\\n### **Table 2: Scalability with Increasing Task Complexity**\\n\\n| Number of Objects | Success Rate (%) | Token Usage |\\n|--------------------|------------------|-------------|\\n| 1 | 90% | 25778 |\\n| 2 | 80% | 50005 |\\n| 3 | 80% | 87668 |\\n| 5 | 70% | 197485 |\\n\\nIn the second experiment, which involved scaling up task complexity, we found that our system demonstrates robustness to a certain extent. As the number of objects increases\\u2014representing greater task complexity\\u2014the system maintains a relatively high and stable success rate (above 70%). 
This indicates that our communication structure is effective and scalable, capable of handling more complex problems.\\n\\nFor the experiments' raw data, you can check this file: [exp_scalability.zip](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/rebuttal/exp_scalability.zip)\\n\\n### **Explanation of changing experiment setting**\\nWe are conducting this experiment on a workstation with a single RTX 4090. Due to the limitations of the simulator and computational constraints, rendering camera sensors for 20 robots always caused the program to crash. If time permits, there might be a workaround for the simulation to reduce the resource consumption. As a result, we simplified the experimental settings and removed the setting for 20 robots.\\n\\nWe thank you again for your thoughtful question.\"}", "{\"title\": \"Official Comment by Authors (part 5)\", \"comment\": \"### **Suggestion on using prior knowledge to improve planning and decision-making**\\n\\nWe appreciate the reviewer's insightful suggestion about incorporating prior knowledge to improve planning efficiency. We want to clarify that our EMOS framework already implements this idea through our novel \\\"robot resume\\\" approach (Sec 3.3), but in a more comprehensive and flexible way than using simple context/class vectors. Here's why:\\n\\n1. Hardware-specific capabilities: Our robot resume captures each robot's physical capabilities by analyzing their URDF files and using forward kinematics to generate numerical specifications. For example, for a quadcopter, its resume would indicate no manipulation capability, automatically excluding it from relevant tasks.\\n2. Context representation: The robot resume provides both textual summaries for common-sense reasoning and numerical specifications for precise spatial calculations. This dual representation allows agents to quickly filter out infeasible tasks while maintaining the ability to perform detailed geometric verification when needed.\\n3. 
Handling complex tasks: Simple context vectors would struggle with complex, compositional tasks. For example, consider a task like \\\"Find the toy in the bedroom on the second floor and place it on the high shelf in the living room.\\\" While a context vector might indicate a robot has manipulation capability, our resume-based approach can:\\n - Use numerical workspace analysis to verify if the robot arm can reach the high shelf height\\n - Check mobility constraints for multi-floor navigation\\n - Consider perception capabilities to ensure the robot can visually locate the toy\\n This level of detailed capability matching would be difficult to encode in simple class vectors.\\n\\nRegarding robot availability, while our current implementation focuses on physical capabilities, the framework can be naturally extended to include dynamic state information (e.g., current task status) in the robot resume.\\n\\nFinally, we really appreciate the reviewer for the careful review of this paper, and we will correct the grammar errors in the paper and update the revised version of the paper as soon as possible.\"}", "{\"summary\": \"This paper presents the Embodiment-Aware Heterogeneous Multi-Robot Operating System (EMOS), an LLM-driven, multi-agent system designed to manage diverse robots in complex household tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and explores an interesting application. In essence, it describes a system with multiple robots, each equipped with an LLM and distinct capabilities. The robots communicate to determine task distribution, using robot resumes to identify which robot is best suited for each task. 
Through hierarchical and decentralized planning, they collaborate to complete complex tasks in a shared environment, leveraging spatial reasoning and embodiment awareness to enhance coordination.\\n1) The main contribution appears to center around how questions are passed to the pre-trained LLM for task allocation in a multi-robot system. The authors proposed a robot resume method that creates a dynamic capability profile for each robot. This profile includes an understanding of each robot's URDF files, enabling the system to call upon robot kinematics tools to generate detailed descriptions of their physical capabilities.\\n2) Habitat-MAS simulation: It is designed for multi-agent systems, facilitating complex task management using a heterogeneous team of robots.\", \"weaknesses\": \"Presenting results is not a contribution. Instead, it serves as a validation that the proposed methodology works as intended and is better than baselines.\", \"questions\": \"I don't have any questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
We hope you are satisfied with our latest revision, in which we tried to address three suggestions in your review. We will also be happy to hear from you about further revision suggestions/ discussions. \\n\\nKind regards,\\n\\nThe authors\"}", "{\"title\": \"Update of Revision V1\", \"comment\": [\"Dear reviewers,\", \"We greatly appreciate all the constructive comments from the reviewers. We have revised our paper according to the suggestions. The updates are as follows:\", \"We reorganized the `Section 2. Related Works` and their relations to our paper as suggested by **Reviewer jjgY** and **Reviewer xZUU**. Specifically, we divided the \\\"Multi-Agent System\\\" subsection into two sections \\\"LLM-Based Multi-Agent System\\\" and \\\"Heterogeneous Multi-Agent Learning\\\". we also updated the \\\"multi-robot systems\\\" section with new references. All the mentioned related works by these two reviewers have been added to the reference list.\", \"We added `Appendix A.2 MULTI-AGENT SYSTEM DESIGN AND COMMUNICATION` according to the request by **Reviewer xZUU** to add the agents' communication graph\", \"We added `Appendix B.4 DETAILED IMPLEMENTATION OF ROBOT LOW-LEVEL CONTROL IN BENCHMARK` according to the request by **Reviewer xZUU**.\", \"We added `Appendix C.5 EXTRA EXPERIMENT ON FORMAT OF ROBOT RESUME` according to the question by **Reviewer 79K2**\", \"We have highlighted all the revised/added parts with blue color for the reviewers' convenience. We will remove the colorization in the formal version. We are also looking forward to reviewers' feedback on this revision.\"]}", "{\"title\": \"Kind reminder for review response\", \"comment\": \"Dear reviewer,\\n\\nWe hope this message finds you well. We are writing to request your response to our rebuttal comment kindly. We would appreciate it if you could review our response and we could have helpful discussions here.\", \"we_have_revised_the_draft_as_you_suggested\": \"- We reorganized the `Section 2. 
Related Works` and their relations to our paper as suggested by **Reviewer jjgY** and **Reviewer xZUU**. Specifically, we divided the \\\"Multi-Agent System\\\" subsection into two sections \\\"LLM-Based Multi-Agent System\\\" and \\\"Heterogeneous Multi-Agent Learning\\\". All the mentioned related works by these two reviewers have been added to the reference list. \\n\\nWe are looking forward to your responses and we also welcome further discussions. \\n\\nKind regards,\\n\\nThe authors\"}", "{\"title\": \"Official Comment by Authors (Part 2)\", \"comment\": \"### **Scalability of the framework**\\n\\nWe recognize the importance of scalability of a multi-agent system and we are grateful to the reviewer for this question. Now we are emergently adding a new experiment to study the system's performance with the increase of robot numbers in the environment. We have started these two experiments: \\n\\n- In the first experiment, we are scaling the number of robots performing the same task. In our experiments, we sample 10 episodes from the manipulation task and evaluate performance across different numbers of Fetch robots (2, 4, 6, 10, and 20) to assess communication efficiency and success rate. However, due to computational limitations and the limited time of the rebuttal session, this experiment is still ongoing.\\n- In the second experiment, we plan to study the impact of increasing task complexity by scaling up the number of objects to manipulate. While maintaining a fixed number of Fetch robots (2), we evaluate the system's performance with varying numbers of objects (1, 2, 3, 5, and 10).\\n\\nSome existing results from the first ongoing experiment indicate that, with the key design of leader allocation and group discussion, the system can successfully assign a minimal yet sufficient number of robots to complete the manipulation tasks.\\n\\nWe will report the experiment results with analysis in the revised paper and comment boxes as soon as possible. 
\\n\\n### **Question on the impact of robot resume format on performance**\\nThank you for your insightful question on the format of the robot resume. To answer the question, we conducted experiments across different formats of robot resumes (JSON, natural language, markdown, and XML), which are generated using GPT-4o:\\n- We sample 10 episodes from the perception task and evaluate the success rate of each format. The performance is mainly restricted by the large language model's (GPT-4o) understanding capabilities. \\n- We present our experimental results on the success rate of each format setting below:\\n| | JSON | Natural Language | Markdown | XML |\\n| ------- | ---- | ---------------- | -------- | ---- |\\n| Average | 0.7 | 0.3 | 0.5 | 0.6 |\\nYou can also download this [exp_robot_format.zip](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/rebuttal/exp_robot_resume_format.zip) to check the data. \\n- According to our experiments, structured formats (JSON and XML) of robot resumes achieve higher success rates than unstructured formats (natural language). The success rates get higher as the format gets more structured. \\n- During experiments, we also found that some formats of robot resumes will lead to LLM hallucinations. In the settings of markdown and XML, the agents would fail to generate the correct format of actions, resulting in a 0% success rate. We then refined the prompts with minimal modifications to obtain meaningful results for comparison.\\n\\nTo answer the question about how to generate a better-formatted resume, we suggest using structured formats like JSON in our framework, rather than loosely structured formats like natural language.\"}", "{\"title\": \"Kind reminder for review response\", \"comment\": \"Dear reviewer,\\n\\nWe hope this message finds you well. We are writing to kindly request your response to our rebuttal comment. 
We would appreciate it if you could review our response and we could have helpful discussions here.\", \"we_have_revised_the_draft_as_you_suggested\": \"- We reorganized the `Section 2. Related Works` and their relations to our paper as suggested by **Reviewer jjgY** and **Reviewer xZUU**. \\n- We added `Appendix A.2 MULTI-AGENT SYSTEM DESIGN AND COMMUNICATION` according to the request by **Reviewer xZUU** to add the agents' communication graph\\n- We added `Appendix B.4 DETAILED IMPLEMENTATION OF ROBOT LOW-LEVEL CONTROL IN BENCHMARK` according to the request by **Reviewer xZUU**. \\n\\nPlease let us know if you have any more questions and we are happy for further discussions.\\n\\nKind regards,\\n\\nThe authors\"}", "{\"title\": \"Official Comment by Authors (part 2)\", \"comment\": \"### **Risk consideration in planning and decision-making process**\\n\\nWe do think this is a good question and we did not cover the discussions from this perspective in the main paper. \\n\\nIn short, we don\\u2019t explicitly intervene with risk management, instead, the central planner conducts task assignments and avoids assignment risk through the reasoning ability of LLMs based on implicit hints. The LLM planner will implicitly consider the potential risks based on its common sense. We set some limitations in the prompt to ensure that the LLM-based planner will consider all robot agent individuals when performing task planning and ensure that all subtasks are assigned to a certain robot. During this process, it is possible that the same robot may be assigned multiple subtasks, or a certain robot may not be assigned any task, which are allowed and not regarded as risks. In the subsequent group discussion process, the subtasks are re-assigned through the reflection of robot agents, and it\\u2019s also used to check that there is no such risk that the same subtask is assigned to multiple agents. 
If an assignment risk occurs, the discussion will be repeated until the risk is eliminated. \\n\\nHowever, our work focuses more on heterogeneous multi-robot collaboration, aiming to propose the concept of embodiment-aware multi-agent task planning. We do appreciate the reviewer\\u2019s valuable question, and we will discuss the limitations of our work on this aspect and the potential improvement towards enhanced risk-aware planning and decision-making.\"}", "{\"summary\": \"This paper introduces a novel framework for controlling heterogeneous multi-agent collaboration, named EMOS. This framework integrates recent advancements in large language models (LLMs) and multimodal models, enabling intelligent cooperation among multiple robots. Unlike previous works, which primarily relied on role-playing approaches, this multi-agent framework innovatively proposes a robot resume based on the robot's URDF files. This enhancement allows for a more precise description of the robots' capabilities, thereby facilitating more efficient communication and collaboration among them. Additionally, the authors present a new benchmark named Habitat-MAS. This benchmark validates the framework's effectiveness. The paper also provides several specific experimental results, which have been published on a dedicated website.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Robots\\u2019 resumes are generated by the LLM based on urdf files, which enhance the communication among robots. This paper more effectively leverages the intelligence of LLMs in addressing the challenges of heterogeneous multi-robot collaboration. LLM can not only play a planner role but also promote the communication effectiveness among robots.\", \"weaknesses\": \"The experimental section of the paper requires additional enhancements. 
While the authors conducted ablation studies on their framework and provided detailed results, there is a notable absence of comparisons with similar frameworks.\", \"questions\": \"1\\uff09In the domain of heterogeneous multi-agents, how does the proposed framework's use of robot resumes improve performance compared to traditional role-playing methods? Moreover, does the framework consistently outperform others across various scenarios and different robot configurations?\\n2\\uff09All robot are completely interconnected, and they are collaborative under complete information. Can the proposed framework be extended to scenarios where there is a game relationship between heterogeneous individuals.\\n3) What is the scalability of the framework? How does the increase in the scale of collaborative robots affect the success rate of tasks?\\n4\\uff09 The format of resumes is artificially defined. Will the format of resumes affect the efficiency of robot assistance? How to generate a better formatted resume?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We really thank the reviewer's time and effort in reviewing our materials to clarify this issue. We are also happy to see the decision by the reviewer to accept our paper.\"}", "{\"title\": \"Reply to Question about Formats of Robot Resumes\", \"comment\": \"We are very grateful for the reviewer\\u2019s feedback.\\n\\nAs for the pros and cons of different format resumes for LLM reasoning, our experiments reveal that deeply nested XML or freeform natural language pose challenges for LLM comprehension due to issues like ambiguous field relationships or inconsistent representations, which could cause hallucination in embodied reasoning. 
In contrast, structured formats like JSON are more easily understood, as they provide clear key-value mappings and reduce reliance on contextual reasoning.\\n\\nTo address the common issues across formats, including the lack of standardization and the increased complexity of parsing, we could adopt a unified format specification to ensure consistent representation and use preprocessing techniques that produce a JSON-like result, simplifying complex formats into a flat and structured form.\"}", "{\"comment\": \"Thanks for the author's response. I do not have any other issues with the submission, and this paper is suitable for ICLR.\"}", "{\"title\": \"Thanks for the relevance statement\", \"comment\": \"Dear Authors,\\n\\nThanks for the detailed relevance statement; upon checking the statistics I am convinced that this paper is indeed suitable for ICLR. I will change my prior opinion and edit the review accordingly. Aside from the initial concerns about relevance, I find no other issues with the submission.\"}", "{\"comment\": \"The author's response has resolved most of my confusion, but I would like to further discuss the last question with the author.\\n\\nThe author mentioned that certain formats of resumes can cause the LLM to fail to understand them. What are the drawbacks of these formats compared to other understandable formats? Can we conduct a simple analysis and find some commonalities, so that the LLM can understand all formats of resumes through certain techniques?\"}", "{\"title\": \"Update on Scalability Experiments' Results\", \"comment\": \"We really thank the reviewer for the insightful question on the scalability of the current framework. 
As promised before, we are now updating the results of the scalability experiments:\\n\\n### **Table 1: Scalability with Increasing Agent Numbers**\\n\\n| Number of Robots | Success Rate (%) | Token Usage |\\n|-------------------|------------------|-------------|\\n| 2 | 80% | 48779 |\\n| 4 | 60% | 73202 |\\n| 6 | 70% | 93252 |\\n| 10 | 50% | 151952 |\\n\\nIn the first experiments of scaling up the robot number, we found that as the number scales up, the multi-agent system will face problems like hallucinations (in the setting of 10 agents) and the average success rate will decline. This is as expected since the hallucination problem in LLM is prevalent and it becomes worse with the increase of context length. This could be alleviated with more powerful LLM models as we have witnessed amazing progress in LLM model capability in the past year. Or designs like hierarchical communication with smaller sub-group discussions and larger group aggregation (delegate meeting) could help to solve the scalability problem in multi-agent discussion.\\n\\n\\n### **Table 2: Scalability with Increasing Task Complexity**\\n\\n| Number of Objects | Success Rate (%) | Token Usage |\\n|--------------------|------------------|-------------|\\n| 1 | 90% | 25778 |\\n| 2 | 80% | 50005 |\\n| 3 | 80% | 87668 |\\n| 5 | 70% | 197485 |\\n\\nIn the second experiment, which involved scaling up task complexity, we found that our system demonstrates robustness to a certain extent. As the number of objects increases\\u2014representing greater task complexity\\u2014the system maintains a relatively high and stable success rate (above 70%). 
This indicates that our communication structure is effective and scalable, capable of handling more complex problems.\\n\\nFor the experiments' raw data, you can check this file: [exp_scalability.zip](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/rebuttal/exp_scalability.zip)\\n\\n### **Explanation of the changed experiment setting**\\nWe are conducting this experiment on a workstation with a single RTX 4090. Due to the limitations of the simulator and computational constraints, rendering camera sensors for 20 robots always caused the program to crash. If time permits, there might be a workaround for the simulation to reduce the resource consumption. As a result, we simplified the experimental settings and removed the setting for 20 robots.\\n\\nWe thank you again for your thoughtful question.\"}
The paper presents the LLM-based MAS framework conducting embodiment-aware reasoning in simulation.\", \"weaknesses\": \"From my perspective, the topic of this paper is too big, making it more like a technical instruction or report rather than a research paper. Many details are not clearly explained, such as low-level planning and control (collision avoidance), mid-level decision-making, and high-level learning. Moreover, the communication graph is an important part of the MAS design, which is also not mentioned in the framework. Furthermore, several robots' plans have apparent errors in the simulation video. For example, at 16 seconds into the video, the drone does not follow the route but goes through the wall directly, etc. I hope the authors can improve it and consider more details about different modules in the framework design.\", \"questions\": \"I suggest the authors reorganize the paper, abstract the model and ideas, and then discuss it from theoretical and practical perspectives,\\nsuch as how to formalize the capabilities of heterogeneous multi-robot systems to satisfy various tasks' requirements, how to define the specific roles and relationships in the group, how to organize their behaviors or strategies to optimize system performance, etc.\", \"i_suggest_the_authors_check_and_refer_the_paper_as_below\": \"1) Yang, Q., & Parasuraman, R. Hierarchical needs based self-adaptive framework for cooperative multi-robot system. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 2991\\u20132998). IEEE.\\n\\n2) H. Hamann and H. W\\u00f6rn, \\u201cA framework of space\\u2013time continuous models for algorithm design in swarm robotics,\\u201d Swarm Intelligence, vol. 2, no. 2-4, pp. 
209\\u2013239, 2008.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal comment\", \"comment\": \"We thank the reviewer for the time and effort you have invested in evaluating our work. We are grateful for the positive comments on the soundness and presentation of our paper. Although the reviewer thinks differently and **\\u201cwould recommend the authors to submit it for a Robotics conference\\u201d**, we would like to address the reviewers\\u2019 concerns and do more clarifications:\\n\\n### **Relevance to ICLR scope**\\n\\nFirstly, we would like to clarify why this paper is relevant to ICLR scope, in response to the reviewer's comment to submit to the Robotics conference. Due to the length limitation of the comment box, we have written a comprehensive relevance statement, available at [Relevance Statement](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/relevance_statement/relevance_statement.pdf). You can also check all the open-source data and data processing script in https://github.com/EMOS-project/EMOS-project.github.io/tree/main/relevance_statement\\n\\nIn short, our work aligns closely with ICLR's growing focus on robotics, embodied AI, and benchmarking, as evidenced by the increasing number of accepted papers in these areas over the past few years. \\n\\n### [(CLICK) statistics plot](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/relevance_statement/paper_statistics.png) \\n\\nWe highlight that, only in ICLR 2024, there are 83 robotics and 103 benchmark papers ACCEPTED. 
EMOS addresses key challenges in embodiment-aware task planning that are currently missing in the Embodied AI community, making it highly relevant to ICLR's scope.\\n\\n### **Regarding the novelty of our contribution**\\n\\nWhile we understand the reviewer\\u2019s perspective, we believe our work presents significant innovations in the field of embodied AI and multi-robot systems. EMOS addresses a crucial gap in embodiment-aware task planning for heterogeneous multi-robot systems, which has not been adequately explored in previous research. Our approach of leveraging large language models (LLMs) for this purpose is novel and aligns with the growing focus on robotics and embodied AI at ICLR, as evidenced by recent accepted papers in this domain.\\n\\n### **Suggestion of fine-tuning the LLM**\\n\\nWe appreciate your suggestion to explore custom fine-tuning of the LLM to be tailored for our application. However, the focus of our paper is leveraging the strong priors and reasoning capabilities of pre-trained LLMs and the design of LLM multi-agent systems to solve problems that are too complex for a single LLM model, i.e., in the context of \\u201cLLM agents\\u201d. We agree that further adaptation could enhance performance, and we plan to explore this direction in future work.\\n\\n### **Contribution of Habitat-MAS simulation**\\n\\nThe purpose of this benchmark platform extends beyond mere implementation. It is intended for the broader Embodied AI community, enabling researchers to study and develop methods for further automation of complex multi-robot systems in a more general problem setting. **As we stated in lines [92-96] of the submission paper, our benchmark tasks are processed such that not all robots or random agents can finish a specific task.** The LLM agents need to understand their physical capabilities for task planning. \\n\\nFinally, we really thank the reviewer's time and effort in reviewing this paper. 
We also value the different perspectives the reviewer held at the beginning. We do hope the statistics and clarifications above can help us win the reviewer back. We would really appreciate it if the reviewer is willing to have more discussions here and we will respond with our best effort.\"}", "{\"title\": \"Kind reminder for review response\", \"comment\": \"Dear reviewer,\\n\\nWe hope this message finds you well. We are writing to request your response to our rebuttal comment kindly. We would appreciate it if you could review our response and we could have helpful discussions here. \\n\\nKind regards,\\n\\nThe authors\"}", "{\"metareview\": \"The paper proposes EMOS, a framework that uses large language models (LLMs) to dynamically coordinate the behavior of teams of heterogeneous robots according to their physical design. EMOS generates a \\\"robot resume\\\" based upon a robot's kinematics (as determined by its URDF) that it then uses to determine their roles. Additionally, the paper proposes a new benchmark (Habitat-MAS) for evaluating multi-agent coordination for tasks that involve manipulation, navigation, and object rearrangement within a multi-floor building.\\n\\nThe paper was reviewed by four referees who largely agreed on the paper's strengths an weaknesses. Among the strengths, several reviewers appreciated the use of LLMs to dynamically allocate tasks across robot teams based upon their kinematics, as opposed to the more traditional approach of pre-defining their roles. At the same time, several reviewers shared the concern that the paper read more like a project report that described a particular prompting strategy, as opposed to a research paper, and that it lacked several key details about the framework. In their response to the reviewers, the authors addressed the need for more detailed explanations of EMOS, however they did not speak to the concern about the presentation style. 
Meanwhile, at least two reviewers emphasized the need to compare against traditional role playing approaches, while one asked for an evaluation of how the method scales as the number of robots increases and the tasks become more complex. The authors clarified that the initial submission already provided a comparison to role playing approaches, while they provided an additional analysis of scalability during the rebuttal period.\", \"additional_comments_on_reviewer_discussion\": \"There was a healthy amount of discussion between the authors, reviewers, and the AC. This included a discussion of concerns with an initial review that questioned the relevance of a systems-focused paper for ICLR, as well as statements that the authors found to lack adequate justification. The AC discussed these concerns with the reviewer, who then updated their review.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Relevance Statement\", \"comment\": \"For the convenience of the reviewer and AC, we have pasted the content of the Relevance Statement under this comment for reference.\\n\\n### **Relevance Statement**\\n\\nWe would like to state that our work on the Embodiment-Aware Heterogeneous Multi-Robot Operating System (EMOS) aligns closely with the scope of ICLR, particularly in areas like robotics, embodied AI, and benchmarking.\\n\\nAccording to statistics about **ICLR accepted papers from 2022 to 2024** (data source: [papercopilot](https://papercopilot.com/statistics/iclr-statistics/)), there has been a growing focus on these topics, which can be seen in [figure 1](https://github.com/EMOS-project/EMOS-project.github.io/blob/main/relevance_statement/paper_statistics.png?raw=true). 
This figure filters relevant past ICLR papers and visualizes trends in topics such as Robotics and Benchmarks, confirming the alignment of our work with the increasing focus of the ICLR community.\\n\\nThen, we will specifically list and analyze some works from the past few years at ICLR that are similar in topic or contributions to our article. Large-scale pretrained models, especially LLMs, trained on internet-scale datasets, bring strong priors and reasoning capabilities to everyday tasks. Many interesting ICLR papers have explored LLM applications for robotic manipulation and other embodied agent tasks. Notable among ICLR 2024:\\n\\n- \\\"**Programmatically Grounded, Compositionally Generalizable Robotic Manipulation**\\\", which leverages pre-trained model modularity to aid robotic manipulation.\\n- \\\"**Building Cooperative Embodied Agents Modularly with Large Language Models**\\\", which uses LLMs as different modules within multi-agent systems.\\n- \\\"**Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds**\\\", an end-to-end LLM-based model enabling embodied agents to perceive their environment.\\n- \\\"**Habitat 3.0: A Co-Habitat for Humans, Avatars, and Robots**\\\", which presents a simulation platform for human-robot interaction tasks.\\n\\nAdditionally,\\n- \\\"**GenSim: Generating Robotic Simulation Tasks via Large Language Models**\\\", applies LLMs to robotic task generation and policy learning.\\n- \\\"**Vision-Language Foundation Models as Effective Robot Imitators**\\\", fine-tunes large models for robot gripper control, both rated as spotlight papers by the ACs.\\n\\nIn the area of agent benchmarking and multi-agent systems, exciting works include:\\n- \\\"**LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents**\\\"\\n- and \\\"**SmartPlay: A Benchmark for LLMs as Intelligent Agents**\\\", which all explore various aspects of LLM evaluation.\\n- Of particular note, the oral 
paper \\\"**MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework**\\\" introduces an innovative role-play approach for multi-agent collaboration.\\n\\nOur work aims to identify the problem of embodiment-aware task planning that is still missing in the Embodied AI community, providing a platform to study this problem and providing an initial method to address these challenges in heterogeneous multi-robot systems. While it\\u2019s true that a large proportion of our efforts fall in simulation engineering for the benchmark, one of the main motivations is to provide a platform to share with the Embodied AI community to work on this direction of further automation of complex multi-robot systems with a more general problem setting. In the methodology part, by leveraging the priors that LLMs hold about robot capabilities parsed from URDF and their decision-making capabilities in complex tasks, we address a practical issue of embodiment-aware understanding innovatively. By building a hierarchical multi-agent system that leverages synchronized communication and distributed execution, we provide a practical approach that is tailored for real-time multi-robot system settings, and this could serve as a starting point for more comprehensive system development.\\n\\nIn conclusion, our work on EMOS aligns with ICLR's growing focus on robotics and embodied AI, addressing the crucial gap in embodiment-aware task planning for heterogeneous multi-robot systems, while providing a benchmark platform and innovative methodology leveraging large language models.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I really appreciate the thorough discussions, additional experiments, and mindful clarifications provided by the authors in their rebuttals. 
I have increased my score accordingly, as I believe both my understanding of the contributions and the paper itself are in much better shape now.\\n\\nGood Luck.\"}", "{\"title\": \"Third kind reminder for response\", \"comment\": \"Dear reviewer,\\n\\nWe hope this message finds you well. This message kindly reminds you that the **deadline for the reviewer to respond is Dec. 2nd**. Please let us know your questions or suggestions, if any.\\n\\nWe understand you are quite busy these days due to the heavy workload of rebuttals and discussions. We hope you are satisfied with our revision to address the three questions/suggestions in your review.\\n\\nSince our last reminder, we also have the following updates on the draft: \\n- We added `C.6 EXTRA EXPERIMENTS ON SCALABILITY OF EMOS`. This section discusses the scalability of the EMOS framework.\\n\\nWe are looking forward to hearing from you about further revision suggestions and discussions.\\n\\nKind regards,\\n\\nThe authors\"}", "{\"comment\": \"The author's response has resolved most of my confusion, but I would like to further discuss the last question with the author.\\n\\nThe author is confident in the framework they have provided and proposes two possible methods to extend the proposed framework to game problems in incomplete communication situations. The reviewer would like to know what problems may be encountered when using these two methods. Can it still be simply transplanted over?\"}", "{\"title\": \"Official Comment by Authors (part 4)\", \"comment\": \"### **Question on how the system handles partially observable environments**\\n\\nWe thank the reviewer for this visionary question. Firstly, we admit that we assume the multi-robot system is equipped with a perfect multi-agent SLAM system, providing full observability of the environment, as we mentioned in section 3.1, lines 203-206. 
While our current implementation focuses on fully observable scenarios, we also recognize the need to address partial observability for real-world applications. For this purpose, we propose several directions that could extend our current system to handle partial observability: \\n- Probabilistic State Estimation: We could incorporate probabilistic methods like Bayesian filtering or particle filters to estimate the full state based on partial observations. This would allow agents to reason about uncertainty in their knowledge of the environment.\\n- Active Perception: We could incorporate active perception strategies, where agents actively seek information to reduce uncertainty about critical aspects of the environment.\\n- Belief-Space Planning: Instead of planning in the state space, we could adapt our planning algorithms to operate in the belief space, accounting for uncertainty in the current state and future outcomes of actions.\\n\\nWe believe two challenges matter for this adaptation in this dynamic decision process. Firstly, agents need to synchronize observations with the global representation to update the state, which could be very costly and slow with LLMs; classical distributed SLAM algorithms could reduce this overhead. Secondly, reasoning about partial observability typically increases the computational burden and instability, which may require optimizations to maintain real-time performance.\\n\\n### **Scalability of the framework**\\n\\nWe recognize the importance of scalability of a multi-agent system and we are grateful to the reviewer for this question. We are now urgently adding a new experiment to study the system's performance as the number of robots in the environment increases. We have started these two experiments: \\n\\n- In the first experiment, we are scaling the number of robots performing the same task. 
In our experiments, we sample 10 episodes from the manipulation task and evaluate performance across different numbers of Fetch robots (2, 4, 6, 10, and 20) to assess communication efficiency and success rate. However, due to computational limitations and the limited time of the rebuttal session, this experiment is still ongoing.\\n- In the second experiment, we plan to study the impact of increasing task complexity by scaling up the number of objects to manipulate. While maintaining a fixed number of Fetch robots (2), we evaluate the system's performance with varying numbers of objects (1, 2, 3, 5, and 10).\\n\\nPreliminary results from the first ongoing experiment indicate that, with the key design of leader allocation and group discussion, the system can successfully assign a minimal yet sufficient number of robots to complete the manipulation tasks.\\n\\nWe will report the experiment results with analysis in the revised paper and comment boxes as soon as possible.\"}", "{\"title\": \"Official Comment by Authors (part 3)\", \"comment\": \"## **Questions on how the system conducts task assignment**\\n\\nWe are amazed by how detailed and thoughtful this question is, and we are grateful to the reviewer for his/her time and dedication to this review. Considering there are multiple sub-questions contained in this question, we will address them in sequence: \\n\\n### **How the central planner optimizes task assignment**\\n\\nIn this question, the reviewer asks about how our system handles task assignment under three scenarios with (1) multiple equally capable robots, (2) differently capable robots that can all execute a task, or (3) universally capable robots. \\n\\nWe apologize for not clearly explaining the central planner's optimization process. Since the central agent is LLM-based, one reason we set up a benchmark consisting of 4 tasks is to test the LLM\\u2019s ability in task assignment. 
We optimize the task assignment through iterative group discussions and add some limitations to the prompt. Regarding the special cases mentioned by the reviewer, it relies more on the LLM\\u2019s reasoning ability on the robot resume and environment context. Our optimization approach is to set limitations through the prompt and add sufficient information in the robot resume and environment context to assist the reasoning of the central planner.\\n\\n### **Does the LLM-based \\u201cdiscussion\\u201d incorporate strategic negotiation or cognitive hierarchy mechanisms**\\n\\nOur discussion does not involve traditional strategic negotiation or cognitive hierarchy mechanisms. As mentioned above, we guide the central planner to make correct judgments through some rule limitations and improve the assignment of the central planner through the embodied reasoning of the robot agent. It is a hierarchical structure similar to 2-level thinking. The central planner has the highest level of cognition and decomposes and assigns the global tasks, while the robot agent only reflects on the tasks assigned to it in combination with the robot resume.\\n\\n### **Can you provide specific examples for the scenarios**\", \"these_three_special_cases_can_be_simplified_as_follows\": \"Multiple robots are capable of executing the same subtask. Our current system optimizes this situation in the following way: When multiple robots can execute the same subtask, the optimization goal becomes minimizing the energy cost for the entire system. We introduce information such as the positions of each object and robot in the environment context and other attributes like battery life in the robot resume. The central planner can first calculate information such as the geodesic distance between each robot and the target object through function calls. Finally, it calculates the energy cost for each robot to execute this subtask. 
Then, the central planner assigns the task according to the calculation results.\\n\\nNext, we will give a specific example for each special case.\\n\\n- For multiple equally capable robots, for example, 2 spots in a multi-floor navigation task with objects on different floors to be rearranged, we will explicitly provide the current position and target position of each spot. LLMs can judge the distance cost for each spot to reach the goal position. Through group discussion, our system will assign the nearest robot capable of completing the task to execute it. For example, if one spot is on the first floor, while the other is on the second floor, and the target object is on the first floor, then the central planner will assign the navigation task to the spot on the first floor to minimize the energy cost. If, instead, the task is to navigate to a cabinet on the second floor, then the latter spot will be assigned.\\n- For differently capable robots, if all the differently capable robots can execute the same task, the LLMs will first consider the task cost just as mentioned above, and then assign different robots to complete the tasks simultaneously. For example, if both the fetch and stretch can reach the objects placed in the house, and the fetch is closer in geodesic distance to the target object than the stretch, then the central planner will assign the rearrange task to the fetch to pick the nearest objects simultaneously for efficiency.\\n- The situation of a universally capable robot is similar to the first case.\\n\\nWe really thank the reviewer for this question and we hope these discussions help answer it.\"}" ] }
ExuBFYtCQU
MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer
[ "Yuancheng Wang", "Haoyue Zhan", "Liwei Liu", "Ruihong Zeng", "Haotian Guo", "Jiachen Zheng", "Qiang Zhang", "Xueyao Zhang", "Shunsi Zhang", "Zhizheng Wu" ]
The recent large-scale text-to-speech (TTS) systems are usually grouped as autoregressive and non-autoregressive systems. The autoregressive systems implicitly model duration but exhibit certain deficiencies in robustness and lack of duration controllability. Non-autoregressive systems require explicit alignment information between text and speech during training and predict durations for linguistic units (e.g. phone), which may compromise their naturalness. In this paper, we introduce $\textbf{Mask}$ed $\textbf{G}$enerative $\textbf{C}$odec $\textbf{T}$ransformer (MaskGCT), a fully non-autoregressive TTS model that eliminates the need for explicit alignment information between text and speech supervision, as well as phone-level duration prediction. MaskGCT is a two-stage model: in the first stage, the model uses text to predict semantic tokens extracted from a speech self-supervised learning (SSL) model, and in the second stage, the model predicts acoustic tokens conditioned on these semantic tokens. MaskGCT follows the mask-and-predict learning paradigm. During training, MaskGCT learns to predict masked semantic or acoustic tokens based on given conditions and prompts. During inference, the model generates tokens of a specified length in a parallel manner. Experiments with 100K hours of in-the-wild speech demonstrate that MaskGCT outperforms the current state-of-the-art zero-shot TTS systems in terms of quality, similarity, and intelligibility. Audio samples are available at https://maskgct.github.io/. We release our code and model checkpoints at https://github.com/open-mmlab/Amphion/blob/main/models/tts/maskgct.
[ "text-to-speech synthesis", "masked generative models", "codec language models", "voice cloning" ]
Accept (Poster)
https://openreview.net/pdf?id=ExuBFYtCQU
https://openreview.net/forum?id=ExuBFYtCQU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wBTaZLczFE", "rrIEoxRvGQ", "qcPzwlOflS", "ptqP68QRJW", "p2exWbaJAU", "ooSYTHCLVq", "nzj2pWkmdQ", "nm2L5r9p2t", "lf4gHIfvFL", "lOtUH7cct1", "l7NRz52vga", "iY0kYMuPSl", "h9wURsDfTe", "foSr0PE0VL", "dx1c6fPEkL", "bNnMccLvzi", "Y8m5wotR9C", "W5cd3aoGVg", "VuKicolInn", "VfmH3D1usB", "VQOFgfEH7q", "U4pyC3Aabj", "Tr3vi5qiSf", "TnoPChPCYx", "RibyhVG6GE", "RQqS3miAhk", "RLQQyw88V9", "R0Y4u6sSSM", "QFbuYpRphL", "Q81pl7ABaR", "LjC1DtFCve", "IyOGA9eK13", "FqH6Cq2w69", "EVkqMryn1U", "DyVtSSX59A", "AN6Fu0er3n", "9V0f4YTVjX", "87fSiYxzA5", "7iEnpxT1tB", "3u8a0TI2KD", "063jK3qPOJ" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731857399889, 1729994396504, 1730035273956, 1732325478759, 1732791438993, 1733207203787, 1731855950763, 1731855722961, 1732685235274, 1732265983577, 1731857292155, 1732084160740, 1732084208820, 1732084225214, 1733207390480, 1732786863171, 1732265989282, 1731857724215, 1732668769748, 1731857545315, 1731855576568, 1732864415436, 1731857690856, 1730589448016, 1731857631422, 1731857598403, 1732265889095, 1732684567211, 1732265957001, 1733207169029, 1730711418491, 1732685264658, 1737523962393, 1732084216955, 1732685248879, 1733681232481, 1731857213866, 1732683061591, 
1731855820739, 1732516770466, 1731856055046 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_EzBH" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_AvMh" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_n456" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_AvMh" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_QWox" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_EzBH" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_n456" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_QWox" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9126/Area_Chair_G2K9" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ], [ "ICLR.cc/2025/Conference/Submission9126/Reviewer_AvMh" ], [ "ICLR.cc/2025/Conference/Submission9126/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer n456 (Part 3)\", \"comment\": \"``Minor Point 2: The advantage of using semantic tokens as an intermediate representation needs justification (e.g., direct text2acoustic without semantics) should be justified.``\\n\\nThank you for this constructive suggestion. We have investigated this issue thoroughly and found several key advantages of our semantic token approach:\\n\\n1. Performance Superiority: Recent works attempting to model continuous features like Mel-spectrograms directly without phone-level duration [3] show lower similarity compared to MaskGCT. Our experiments demonstrate that directly mapping text to multi-layer acoustic tokens using masked generative models leads to convergence difficulties, resulting in poor intelligibility and high WER.\\n\\n2. Training Stability: Both [3,5] report convergence issues on small datasets when bypassing intermediate representations. This aligns with our early experimental findings.\\n\\n3. Empirical Evidence: We conducted a comparative study between MaskGCT and a direct text-to-acoustic approach (implemented by removing semantic token conditioning and adding text conditioning to our semantic-to-acoustic model) on a 10K-hour subset. The results clearly show that direct acoustic token prediction from text faces convergence challenges, yielding lower SIM scores and substantially higher WER. 
This demonstrates that our two-stage approach effectively reduces the overall modeling complexity.\\n\\nWe are currently extending these experiments to the full dataset and will include comprehensive results in the revised paper.\\n\\n| Model | SIM-O \\u2191 | WER \\u2193 |\\n|-------|----------|---------|\\n||**SeedTTS *test-en***|\\n| Text-to-Acoustic (Emilia 10K hours) | 0.651 | 12.75 |\\n| MaskGCT (Emilia 10K hours) | **0.719** | **2.872** |\\n|| **SeedTTS *test-zh*** |\\n| Text-to-Acoustic (Emilia 10K hours) | 0.727 | 17.08 |\\n| MaskGCT (Emilia 10K hours) | **0.762** | **3.302** |\\n\\n``Minor Point 3: Section 4.2.3's purpose and motivation need clarification, particularly if addressing speed modification, as specialized tools exist for this purpose.``\\n\\nThis section aims to demonstrate two key aspects of our model:\\n\\n1. Generation Capability: MaskGCT can produce high-quality speech across a wide range of durations while maintaining natural prosody. This is fundamentally different from simple speech rate adjustment, which often introduces artifacts in prosody, pitch, and timbre.\\n\\n2. Control and Diversity: Our approach offers better controllability over speech generation compared to AR models, while ensuring diverse outputs. We have provided audio samples on our demo page that showcase natural prosodic variations across different durations.\\n\\n``Minor Point 4: Additional efficiency metrics beyond inference steps (e.g., latency) should be considered.``\\n\\nThanks for your suggestion. We provide the real-time factor (RTF) of MaskGCT on an A100 GPU for generating a 20-second speech across various inference steps in the table below. Across all configurations presented, there is no significant performance difference. Additionally, we also present the RTF of AR + SoundStorm. For AR + SoundStorm, generating a 20-second speech requires 20 * 50 = 1000 steps for text-to-semantic inference.
However, we can leverage kv-cache to accelerate the process.\\n\\n| Model | T2S steps | S2A steps | RTF |\\n|-------|-----------|-----------|-----|\\n| MaskGCT | 50 | [40, 16, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.52 |\\n| MaskGCT | 50 | [10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.44 |\\n| MaskGCT | 25 | [10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.31 |\\n| AR + SoundStorm | 1000 | [40, 16, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.98 |\\n\\nThanks again for your constructive comments. We would be grateful if we could hear your feedback regarding our answers to the reviews. We would be happy to answer and discuss if you have further comments.\\n\\n[1] Li X, Liu S, Lam M W Y, et al. Diverse and expressive speech prosody prediction with denoising diffusion probabilistic model[J]. arXiv preprint arXiv:2305.16749, 2023.\\n\\n[2] He H, Shang Z, Wang C, et al. Emilia: An extensive, multilingual, and diverse speech dataset for large-scale speech generation[J]. arXiv preprint arXiv:2407.05361, 2024.\\n\\n[3] Chen Y, Niu Z, Ma Z, et al. F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching[J]. arXiv preprint arXiv:2410.06885, 2024.\\n\\n[4] A^2-Flow: Alignment-Aware Pre-training for Speech Synthesis with Flow Matching https://openreview.net/attachment?id=e2p1BWR3vq&name=pdf\"}", "{\"summary\": \"This paper presents MASKGCT, a compact Non-Autoregressive (NAR) text-to-speech model. This model is meticulously designed with a specific emphasis on attaining outstanding audio quality. What sets it apart from prior works is its utilization of semantic tokens extracted from the SSL representation using VQVAE and its avoidance of relying on explicit alignment information.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper offers quite interesting and remarkably high-quality demo audio. 
The audio samples provided within the paper not only demonstrate clear and distinct sound qualities but also exhibit a certain level of innovation in terms of the audio characteristics they present. They are able to effectively capture the attention of the readers and give a practical sense of the research outcomes in the audio domain.\", \"The proposed pipeline is significantly more compact than previous ones such as NaturalSpeech3. This compactness implies a more streamlined and efficient design. It potentially leads to reduced computational complexity and may offer advantages in terms of implementation and resource utilization.\"], \"weaknesses\": [\"The Word Error Rate (WER) in Table 4 and 5 of all methods is much higher than previously reported. This discrepancy requires more in-depth explanation. It is essential to analyze the factors contributing to this higher WER, such as possible differences in the data sets used, the experimental settings, or any unique characteristics of the methods employed in this study. A detailed exploration and clarification of these aspects would enhance the understanding and validity of the results presented.\", \"In Section 3.2.1, the authors utilize VQVAE to obtain discrete semantic tokens instead of k-means. However, no corresponding experiment result is provided. It would be beneficial to include the experimental results related to the use of k-means for a more comprehensive understanding. These results could demonstrate the effectiveness and performance of VQVAE in this context, and also facilitate a comparison with the potential outcomes if k-means had been used. This would add more substance to the discussion and support the authors' choice of using VQVAE.\", \"To my knowledge, using text as a condition without expanding by duration for a Non-Autoregressive (NAR) model was first proposed in e2-tts. The authors should point out this in Section 3.2.2. 
Acknowledging the prior work in this area is important for providing a complete context and showing the evolution of the research. By highlighting the connection to e2-tts, the authors can better position their work within the existing body of knowledge and give credit to the relevant pioneering research. It also helps readers to better understand the background and significance of the approach taken in this study.\", \"questions\": \"Address my concerns in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a zero-shot text-to-speech (TTS) system called Masked Generative Codec Transformer (MaskGCT), which comprises two stages: 1) predicting semantic tokens from text and 2) using these semantic tokens to generate acoustic tokens. Both stages leverage a non-autoregressive masked generative transformer with specific prompt tokens for in-context learning. A VQ-VAE codec and a residual vector quantization (RVQ) codec are separately trained to obtain the semantic tokens and acoustic tokens. Extensive experiments across various tasks demonstrate that MaskGCT, trained on 100k hours of speech data, surpasses existing state-of-the-art zero-shot TTS systems in quality, similarity, and intelligibility.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work utilizes non-autoregressive masked generative transformers to generate both semantic tokens and acoustic tokens, offering advantages such as flexible length control, better robustness, and higher inference efficiency.
This is suggested by the experiments in Section 4.2.2, which compare against the system replacing the first stage with an autoregressive model (i.e., AR + SoundStorm).\", \"The authors conduct extensive experiments on a variety of tasks to evaluate the effectiveness and scalability of the proposed system, including not only standard zero-shot TTS, but also cross-lingual dubbing, voice conversion, emotion control, and speech content editing, with potential use of post-training and fine-tuning.\", \"The selected state-of-the-art baselines are well-chosen.\"], \"weaknesses\": \"- The novelty of this work is somewhat incremental, though I would appreciate its engineering contributions. This work follows a two-stage codec-based sequence-to-sequence strategy, converting text into semantic tokens and then into acoustic tokens, which is in line with SPEAR-TTS [1]. The SoundStorm [2] paper also explores this strategy. Additionally, the idea of applying non-autoregressive masked generative transformer for codec-based TTS is presented in SoundStorm [2] and NaturalSpeech3 [3] (though the NaturalSpeech3 paper describes it from a diffusion perspective). However, I would like to highlight the distinction that this work employs non-autoregressive masked generative transformers in both stages of the two-stage framework.\\n- The authors emphasize that the proposed method does not require text-speech alignment supervision or phone-level duration prediction. However, it appears that the method does require a specified length for the target semantic token sequence, thereby controlling the length of the generated speech. The presented experimental results in Table 2, compared to baseline methods, are based on either ground-truth duration or predicted duration. As described in the paper, the authors train a duration predictor to estimate the phone-level durations, which are then summed to determine the total duration of the target speech.
This process relies on pre-computed ground-truth phone-level durations. Therefore, it seems that this may not fully align with the claim of not requiring text-speech alignment supervision or phone-level duration prediction. \\n- The authors claim that training a VQ-VAE model to quantize the semantic representation can reduce the information loss, compared to using k-means clustering as in previous works. I think it would be better if the authors can provide ablation results or relevant references to support this assertion. \\n\\n[1] Kharitonov, E., Vincent, D., Borsos, Z., Marinier, R., Girgin, S., Pietquin, O., ... & Zeghidour, N. (2023). Speak, read and prompt: High-fidelity text-to-speech with minimal supervision. Transactions of the Association for Computational Linguistics, 11, 1703-1718.\\n\\n[2] Borsos, Z., Sharifi, M., Vincent, D., Kharitonov, E., Zeghidour, N., & Tagliasacchi, M. (2023). Soundstorm: Efficient parallel audio generation. arXiv preprint arXiv:2305.09636.\\n\\n[3] Ju, Z., Wang, Y., Shen, K., Tan, X., Xin, D., Yang, D., ... & Zhao, S. (2024). Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models. arXiv preprint arXiv:2403.03100.\", \"questions\": [\"In Table 1, does \\\"ZS TTS\\\" denote \\\"zero-shot TTS\\\", and what's \\\"CL TTS\\\"? For \\\"Imp. Dur.\\\", I also don't fully agree that this work implicitly models duration, as it requires a specified length of the target generated speech.\", \"In Section 3.2.3, why \\\"the number of frames in the semantic token sequence is equal to the sum of the frames in the prompt acoustic sequence and the target acoustic sequence\\\"?\", \"Is the output length of the text-to-semantic stage equal to that from the semantic-to-acoustic stage? 
From Figure 2, the former seems shorter than the latter?\", \"In the first paragraph of Section 2, about \\\"SpearTTS utilizes three AR models to predict semantic tokens from text, coarse-grained acoustic tokens from semantic tokens, and fine-grained acoustic tokens from coarse-grained tokens.\\\", the terms \\\"coarse-grained\\\" and \\\"fine-grained\\\" do not seem to be mentioned in the SpearTTS [1] paper, what do the authors mean by them?\", \"What's the length of prompt audio? Does it affect the quality of the generated speech?\", \"[1] Kharitonov, E., Vincent, D., Borsos, Z., Marinier, R., Girgin, S., Pietquin, O., ... & Zeghidour, N. (2023). Speak, read and prompt: High-fidelity text-to-speech with minimal supervision. Transactions of the Association for Computational Linguistics, 11, 1703-1718.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. For novelty.\\n\\nThe authors state \\\"While this approach was originally developed for image generation (MaskGIT), its adaptation to TTS presents unique challenges and opportunities.\\\" \\u201cAdaptation\\u201d reflects the limited novelty of the work. While MaskGCT introduces VQ-based semantic tokens, this seems to be more of an incremental modification better suited for ablation studies. Overall, there appears to be no significant differentiation between this work and SpearTTS. The method is an A+B style work, which other reviewers have also noted as a concern. For instance, we could take an existing paper like VoiceBox and replace one of its modules with a new model proposed in computer vision to potentially achieve improvements. While the engineering contribution is acknowledged, the novelty contribution is really poor. \\n\\n2. Regarding Duration Modeling:\\n\\nIn TTS, we primarily evaluate zero-shot inference results, which don't require pre-extracted phoneme duration. 
Therefore, this isn't necessarily a drawback in NaturalSpeech3. While the authors mention \\\"the lack of efficient tools for phone-level duration extraction,\\\" using MFA for offline extraction, despite not being 100% accurate, remains a viable approach. Furthermore, it's unclear whether the improvement in naturalness stems from the removal of duration prediction, as noted in paper [2]. If the authors intend to demonstrate that duration-based methods are inferior to AR-based methods for large-scale TTS work, additional metrics and experiments are needed. \\n\\nFor example, replacing AR modules with duration-based methods would provide direct performance comparisons. Additionally, you should analyze how AR models implicitly handle speech-text alignment and develop metrics to evaluate this alignment quality (either via ground truth or phoneme-based tasks). A key question to address is: Do AR-based methods perform better simply because they achieve better alignment, or are other factors at play? The current work doesn't fully explain why duration models perform worse, making it difficult to understand the true advantages of AR-based approaches. Understanding these mechanisms would not only validate the authors' claims but also provide valuable guidance for future TTS system design.\\n\\n3. VQ semantics vs Kmeans semantics.\\n\\nThe reconstruction loss and semantic-to-acoustic generation metrics are not sufficient evidence to conclude that VQ contains more semantic information than K-means. These metrics primarily measure global patterns, while semantics are fundamentally local patterns. The definition of semantics used in this work needs clarification. Semantics cannot be reduced to good reconstruction quality or high WER scores in speech recognition. Rather, semantics refers to the clear delineation of linguistic units such as words, syllables, and phoneme boundaries. Local metrics should be developed to properly evaluate these semantic properties. 
For better understanding of semantics, please refer to the DINO paper (https://arxiv.org/abs/2104.14294). Additional relevant references in speech include: https://arxiv.org/abs/2310.10803\\n\\nOverall, my concerns have not been adequately addressed. The paper would be substantially stronger with more rigorous statements, detailed explanations, and the development of appropriate metrics.\"}", "{\"title\": \"Reply to Reviewer QWox (Part 2)\", \"comment\": \"``Weakness 3: The semantic tokens are not well disentangled from speaker information, making MaskGCT less effective for voice conversion tasks.``\\n\\nSince MaskGCT was primarily designed for Zero-Shot TTS tasks, we did not extensively focus on decoupling speaker information from semantic tokens.
However, in our recent research, we have enabled voice conversion capabilities through a simpler approach in the semantic-to-acoustic model. Specifically, we utilize a lightweight yet efficient voice conversion model (such as OpenVoice) to perform real-time voice conversion on the target speech using randomly sampled prompt speech, thereby achieving timbre perturbation. These perturbed semantic tokens are then used as input to predict target acoustic tokens with the prompt in the semantic-to-acoustic model. The results are presented below. **We will incorporate these details in the revised version of the paper.** We use the VCTK dataset to evaluate our system, randomly selecting 200 samples as source speech. For each sample, we randomly select another sample from the same speaker as the prompt speech.\\n\\n| Model | SIM-O(\\u2191) | WER(\\u2193) | DNSMOS(\\u2191) | NISQA(\\u2191) |\\n|-------|-----------|---------|------------|-----------|\\n| HierSpeech++ [1] | 0.379 | 4.87 | 3.402 | 3.794 |\\n| LM-VC [2] | 0.286 | 8.35 | 3.457 | 3.927 |\\n| UniAudio [3] | 0.249 | 9.00 | 3.472 | 4.279 |\\n| MaskGCT-VC | **0.532** | **4.49** | **3.510** | **4.469** |\\n\\n``Weakness 4: MaskGCT shows lower pronunciation accuracy compared to VoiceBox/NaturalSpeech 3 in single-sample generation.``\\n\\nI believe the reason NaturalSpeech 3 and VoiceBox outperform MaskGCT in terms of WER metrics may be due to ASR's tendency to more accurately recognize standard speech. This is because NS3 and VoiceBox incorporate phone-level duration predictors, and the LibriSpeech dataset, being a more standardized recording test set, allows NS3 and VoiceBox to predict durations more effectively. However, for CMOS, MaskGCT performs better than VoiceBox.\\n\\n``Weakness 5: The objective metrics appear to be worse than seed-TTS baseline.``\\n\\nAs far as I know, Seed-TTS utilizes over 2 million hours of training data, which is 20 times the amount used by our model. 
Consequently, it is challenging for our model to achieve a fair comparison in performance with Seed-TTS.\\n\\n## Questions\\n\\n``Q1: It would be helpful to mention the frame rate of semantic and acoustic tokens.``\\n\\n\\nAs detailed in Appendix A.4, our model operates with two types of tokens: semantic tokens (16KHz sampling rate, 320 hopsize) and acoustic tokens (24KHz sampling rate, 480 hopsize). Both tokens effectively maintain a consistent frame rate of 50Hz. We will move this important technical detail to the main text for better clarity.\\n\\n``Q2: I am curious whether only a single sampling step is used from the third layer during acoustic token generation. Is there any difference when using more sampling steps?``\\n\\nGiven that the first layer of acoustic tokens carries the majority of information, we allocate more inference steps to this layer while reducing steps for subsequent layers. As demonstrated in Appendix A.3, we have analyzed how varying the number of inference steps in the second-stage model affects overall performance. Our experiments show that even a minimal configuration [10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] maintains comparable performance, with only marginal decreases in SIM and WER metrics.\\n\\n``Q3: It would be beneficial to also present the overall sampling speed of the model, including the Real-Time Factor.``\\n\\nThanks for your suggestion. **We will also add it to the revised version of the paper.** We present the real-time factor (RTF) of MaskGCT on an A100 GPU for generating a 20-second speech across various inference steps in Table. Across all configurations presented, there is no significant performance difference. Additionally, we also present the RTF of AR + SoundStorm. For AR + SoundStorm, generating a 20-second speech requires 20 * 50 = 1000 steps for text-to-semantic inference. 
However, we can leverage kv-cache to accelerate the process.\\n\\n| Model | T2S steps | S2A steps | RTF |\\n|-------|-----------|-----------|-----|\\n| MaskGCT | 50 | [40, 16, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.52 |\\n| MaskGCT | 50 | [10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.44 |\\n| MaskGCT | 25 | [10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.31 |\\n| AR + SoundStorm | 1000 | [40, 16, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.98 |\"}", "{\"title\": \"Paper Update\", \"comment\": \"## Additional Comparisons\", \"we_have_conducted_extensive_comparative_experiments_to_provide_a_more_comprehensive_understanding_of_maskgct\": [\"1. Efficiency Analysis: Compared RTF (Real-Time Factor) between MaskGCT under various inference parameters and AR models\", \"2. Small Dataset Performance: Evaluated MaskGCT's effectiveness when trained on limited data\", \"3. Architecture Comparison: Conducted comparative studies between MaskGCT and direct text-to-acoustic models\", \"4. Voice Conversion Enhancement: Introduced a novel approach to improve MaskGCT's voice conversion capabilities, with comparative evaluations against baseline methods\", \"## Revised Paper\", \"In addition, **we have uploaded a revised version of the paper with all changes highlighted in blue**. The key updates include:\", \"Fixed a typo in the system comparison figure and moved it to the Appendix (page 20, start at line 1050)\", \"Added P-Flow reference in the related work section (page 2, line 101)\", \"Added analysis of semantic representation codecs (VQ vs. 
k-means, vocabulary size) in Section 4.4 (page 9, start at line 466)\", \"Included RTF measurements under different settings in Appendix A.3 (page 13, start at line 938)\", \"Added performance comparison on small datasets in Appendix K: Discussion about Concurrent Works (page 24, start at line 1290)\", \"Added performance comparison between MaskGCT and direct text-to-acoustic model in Appendix K: Discussion about Concurrent Works (page 25, start at line 1307)\", \"Added new voice conversion improvements in Appendix I: Voice Conversion (page 23, start at line 1236)\", \"Included explanation for WER variations in Section 4.3: Speech Style Imitation (page 9, start at line 441)\", \"Detailed two approaches for total duration calculation in Appendix A.5 (page 19, start at line 1025)\", \"All modifications are clearly marked in the revised manuscript. We welcome any additional questions or feedback.\"]}", "{\"comment\": \"Thanks for your comments. Here are our replies for your remaining concerns.\\n\\n**novelty**\\n\\nMasked generative models have been proven to be a class of versatile and powerful generative models, like diffusion models, which have been applied in various fields such as images, videos, and audio. However, how to leverage masked generative models for better TTS is worth exploring. Previous works, such as SoundStorm and NaturalSpeech 3, have also used masked generative modeling, but they both require frame-level conditioning: NaturalSpeech 3 relies on traditional phone-level duration predictors, while SoundStorm is based on AR. A key question MaskGCT explores is whether NAR TTS models can eliminate the dependency on phone-level duration supervision. 
Additionally, MaskGCT investigates VQ-based semantic tokens, and in our supplementary experiments, we also explore the differences between two-stage models and direct text-to-acoustic modeling with masked generative modeling.\\n\\nWe also illustrate the differences between MaskGCT and traditional NAR (Non-Autoregressive) methods in text-speech alignment modeling through a figure (in this [https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/aatn_map.png](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/aatn_map.png)). For traditional NAR TTS systems, a duration predictor and length regulator are first used to obtain frame-level conditions that provide alignment information. For MaskGCT, similar to AR (Autoregressive) methods, we concatenate the text in front of the speech tokens through in-context learning, and the model implicitly learns speech-text alignment through self-attention.\\n\\n**Regarding Duration Modeling**\\n\\nFor models like NaturalSpeech 3 and VoiceBox that rely on in-context learning with a prompt mechanism, extracting the duration of prompt phones is required during inference. While we do not claim that this is necessarily a drawback in NaturalSpeech 3, it does introduce prior knowledge and additional modules, which are worth exploring to determine if they can be eliminated. I believe eliminating more priors and redundant modules is not only a development direction in the TTS field but also a general trend in most domains. The experiment results show that our method outperforms or matches previous AR and NAR (which require phone-level duration) in terms of metrics. Compared to NAR models that require phone-level duration, our method is simpler, does not need phone-level duration, and achieves better SIM, CMOS, and SMOS. Compared to AR models that also do not require phone-level duration, our method has fewer inference steps, is faster, and achieves better SIM, WER, CMOS, and SMOS. 
In addition, we provide attention maps at different stages of inference (steps 1, 10, 20) and across various layers of the model (layer 1, layer 6, layer 16) in this [https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/ar_vs_nar_vs_maskgct.png](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/ar_vs_nar_vs_maskgct.png). These attention maps demonstrate that our model implicitly learns the alignment between text and speech. It shows that the model handles the alignment between speech tokens and text in the middle layers of the network, while in the deeper layers, the model has already resolved the alignment and focuses on processing the tokens.\"}", "{\"comment\": \"Thank you again for your great efforts and the valuable comments.\\n\\nWe have carefully addressed the main concerns in detail. We hope you might find the response satisfactory. As the paper discussion phase is coming to an end, we would be grateful if we could hear your feedback regarding our answers to the reviews. We will be very happy to clarify any remaining points (if any).\"}", "{\"title\": \"Reply to Reviewer n456 (Part 2)\", \"comment\": \"``Weakness 5: The claim about information loss in k-means-based discrete SSL units compared to VQ-VAE needs better substantiation.``\\n\\nThank you for your valuable suggestion. This question has also been raised by other reviewers, and we believe a thorough investigation of this matter will significantly strengthen our paper. We will first present our empirical findings (which are already included in our paper), followed by additional experimental results. **All these analyses will be incorporated into the revised version of our paper.**\\n\\nIn our initial experiments, we observed that k-means-based semantic tokens were less effective in predicting acoustic tokens for languages with rich prosodic variations, particularly Chinese, where significant pitch variations were observed. 
We further support this finding with qualitative analysis.\\n\\n1. Since k-means can be seen as optimizing the reconstruction loss between input and reconstruction, we present reconstruction loss curves under different k-means and VQ configurations. We compare four configurations: VQ-8192 (which is the same as in our paper), VQ-2048, k-means-8192, and k-means-2048.\\nThe loss curves can be found at this [link](https://github.com/maskgct/maskgct/raw/refs/heads/main/recon_loss_for_vq.PNG).\\n\\n2. The information preservation in semantic tokens directly affects the semantic-to-acoustic model's prediction performance. We present the top-10 accuracy (shown in this [link](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/soundstorm_layer1_acc.PNG)) of the semantic-to-acoustic model in predicting the first layer of acoustic tokens. The results demonstrate that VQ-8192 outperforms VQ-2048, which in turn outperforms k-means-8192.\\n\\n3. We investigate the impact of different semantic representation approaches on acoustic token reconstruction. We train separate semantic-to-acoustic (S2A) models for each configuration and evaluate their performance through speech reconstruction metrics. Across all three test sets, the results reveal a consistent performance ranking in similarity (SIM) scores, with VQ-8192 yielding the highest performance, followed sequentially by VQ-2048, k-means-8192, and k-means-2048. For WER, VQ-based methods also demonstrate superior performance over k-means approaches, though this advantage is less pronounced on LibriSpeech test-clean. Notably, on SeedTTS test-zh (Chinese test set), k-means exhibits a substantial degradation in WER. 
We attribute this to the stronger coupling between Chinese semantics and prosodic features, where the transition from VQ to k-means results in significant loss of prosodic information in the semantic representations.\\n\\n| Semantic Codec | LibriSpeech *test-clean* | | SeedTTS *test-en* | | SeedTTS *test-zh* | |\\n|--------------|----------|---------|----------|---------|----------|---------|\\n| | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 |\\n| k-means 2048 | 0.648 | 3.013 | 0.658 | 3.989 | 0.691 | 11.420 |\\n| k-means 8192 | 0.661 | 2.862 | 0.664 | 3.012 | 0.713 | 8.782 |\\n| VQ 2048 | *0.671* | **2.177** | *0.692* | *3.187* | *0.744* | *4.913* |\\n| VQ 8192 | **0.680** | *2.223* | **0.713** | **2.175** | **0.763** | **2.088** |\\n\\nWe believe these additions will strengthen our argument and offer a clearer understanding of the impact of different semantic token approaches on model performance.\\n\\n``Minor Point 1: Figure 2 would benefit from additional notations for clarity. Also, what is the difference between prompt semantic tokens and target semantic tokens? Can you give an example?``\\n\\nThe prompt serves as a reference for style, prosody, and speaker characteristics, enabling zero-shot TTS through in-context learning. During training, we randomly select a prefix from each sample as the prompt, keeping its semantic tokens unmasked. The model learns to predict masked tokens by conditioning on both the prompt's semantic tokens and the input text. For inference, we construct the input sequence by concatenating [prompt text, target text, prompt semantic tokens], which guides the model in predicting the target tokens.\"}", "{\"comment\": \"Thanks again for your valuable comments, we would be grateful if we could hear your feedback regarding our answers. 
We would be happy to answer and discuss if you have further comments.\"}", "{\"comment\": \"Thanks again for your valuable comments, we would be grateful if we could hear your feedback regarding our answers. We would be happy to answer and discuss if you have further comments.\"}", "{\"comment\": \"Thanks again for your valuable comments, we would be grateful if we could hear your feedback regarding our answers. We would be happy to answer and discuss if you have further comments.\"}", "{\"title\": \"Thanks for response\", \"comment\": \"I believe this paper is overall well-completed, and I will maintain my score at 6.\"}", "{\"comment\": \"The authors have made a lot of effort to address various comments, and I appreciate their work in resolving them. For weakness 2, they have clearly demonstrated that the VQ-VAE approach is superior to the k-means approach from the TTS perspective, which was previously overlooked. I also confirmed that updates related to voice conversion have been made in the paper, and it seems that weakness 5 and other questions have also been adequately addressed.\\n\\nHowever, regarding weakness 4, I am skeptical that the reason for MaskGCT's poor performance is that ASR aligns more easily with standard speech. When the ground truth length is used, MaskGCT also performs well, suggesting that the issue might not lie in ASR's tendencies but rather in occasional failures of the text-to-semantic token conversion. Regardless of the exact cause, based on the results presented in the paper and the rebuttal, I intend to maintain my score at 6.\"}", "{\"comment\": \"Thank you again for your great efforts and the valuable comments.\\n\\nWe have carefully addressed the main concerns in detail. We hope you might find the response satisfactory. As the paper discussion phase is coming to an end, we would be grateful if we could hear your feedback regarding our answers to the reviews. 
We will be very happy to clarify any remaining points (if any).\"}", "{\"title\": \"Reply to Reviewer EzBH (Part 2)\", \"comment\": \"``Weakness 3: should acknowledge e2-tts as the first work using unexpanded text conditioning in NAR TTS.``\\n\\nThank you for highlighting this important point. We would like to clarify our treatment of this topic. In our related works, we mention that \\\"SimpleSpeech, DiTTo-TTS, and E2-TTS [1] are also NAR-based models that do not require precise alignment information between text and speech, nor do they predict phone-level duration,\\\" and discuss the differences between our model and theirs in Appendix K. E2-TTS and its recent open-source version [2] have demonstrated superior performance in large-scale training. However, recent work [2, 3] also points out that it struggles to converge on small datasets. We conducted a similar experiment, directly predicting acoustic codes from text, and observed similar phenomena. We provide some results below. The results show that directly predicting acoustic codes from text struggles to converge, resulting in lower SIM and significantly higher WER. Of course, this approach differs from E2-TTS in several aspects. We conducted this investigation primarily to demonstrate that our two-stage model potentially reduces the overall modeling complexity.\\n\\n| Model | SIM-O \\u2191 | WER \\u2193 |\\n|-------|----------|---------|\\n||**SeedTTS *test-en***|\\n| Text-to-Acoustic (Emilia 10K hours) | 0.651 | 12.75 |\\n| MaskGCT (Emilia 10K hours) | **0.719** | **2.872** |\\n|| **SeedTTS *test-zh*** |\\n| Text-to-Acoustic (Emilia 10K hours) | 0.727 | 17.08 |\\n| MaskGCT (Emilia 10K hours) | **0.762** | **3.302** |\\n\\nThanks again for your constructive comments. We would be grateful if we could hear your feedback regarding our answers to the reviews. 
We would be happy to answer and discuss if you have further comments.\\n\\n[1] Eskimez S E, Wang X, Thakker M, et al. E2 TTS: Embarrassingly easy fully non-autoregressive zero-shot TTS[J]. arXiv preprint arXiv:2406.18009, 2024.\\n\\n[2] Chen Y, Niu Z, Ma Z, et al. F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching[J]. arXiv preprint arXiv:2410.06885, 2024.\\n\\n[3] A^2-Flow: Alignment-Aware Pre-training for Speech Synthesis with Flow Matching https://openreview.net/attachment?id=e2p1BWR3vq&name=pdf\"}", "{\"comment\": \"Thank you for your response. While Reviewer n456's comments on novelty might be overly critical, the authors have provided excellent video demonstrations, particularly the one featuring Black Myth. However, considering the overall quality of the paper, I believe maintaining my current score is appropriate.\"}", "{\"title\": \"Reply to Reviewer AvMh (Part 1)\", \"comment\": \"First of all, we want to thank the reviewer for your careful reading and providing a lot of constructive comments! Additionally, we appreciate the acknowledgment of the impact of our work on the community. Below we address the concerns mentioned in the review.\\n\\n## Weaknesses\\n``Weakness 1: The novelty is incremental as it mainly follows SPEAR-TTS's two-stage strategy and SoundStorm/NaturalSpeech3's masked generation approach, though it uniquely applies non-AR masked generation to both stages.``\\n\\nWe appreciate your recognition of our engineering contributions and **commit to open-sourcing all code and weights to benefit the research community**. We would like to highlight MaskGCT's significant contributions at both representation and modeling levels:\\n\\n1. At the modeling level, MaskGCT innovatively applies masked generative modeling to both text-to-semantic and semantic-to-acoustic generation. While this approach was originally developed for image generation (MaskGIT), its adaptation to TTS presents unique challenges and opportunities. 
Notably, MaskGCT achieves non-autoregressive TTS without requiring explicit phone-speech alignment supervision or phone-level duration prediction, which significantly simplifies the traditional TTS pipeline while achieving more natural and consistent generation results.\\n\\n2. At the representation level, MaskGCT introduces VQ-based semantic tokens, which represents a distinct approach compared to previous methods that relied on k-means clustering.\\n\\n``Weakness 2: The claim of not requiring text-speech alignment or duration prediction seems inconsistent, as the method still needs target sequence length prediction based on phone-level durations.``\\n\\nIt is noteworthy that there are several methods to determine the total duration length. Our trained duration predictor is solely for providing a rough estimate to facilitate inference and comparison. Alternatively, a simpler approach could be: $\\\\textit{target total duration} = \\\\textit{target phone number} \\\\times \\\\frac{\\\\textit{prompt total duration}}{\\\\textit{prompt phone number}}$. Additionally, one could train a regression-based model to directly predict the total duration using prompt text, target text, and prompt speech as inputs, as proposed in [1]. In Section 4.2.3 and our demo pages, we also demonstrate that our method can generate satisfactory results within a reasonable range of total durations.\\n\\nThe table below illustrates the comparative results of MaskGCT under three different total duration calculation methods. 
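As an aside for readers, the rule-based estimate above can be sketched in a few lines. This is a hypothetical illustration of the stated formula only; the function name and example numbers are made up, not taken from the MaskGCT codebase:

```python
# Hypothetical sketch of the rule-based total-duration estimate:
# scale the prompt's average per-phone duration to the target phone count.

def estimate_target_duration(prompt_duration_s: float,
                             prompt_phone_count: int,
                             target_phone_count: int) -> float:
    """target duration = target phones * (prompt duration / prompt phones)"""
    per_phone_s = prompt_duration_s / prompt_phone_count
    return target_phone_count * per_phone_s

# e.g. a 3 s prompt containing 30 phones gives 0.1 s per phone,
# so a 45-phone target is estimated at 4.5 s.
print(estimate_target_duration(3.0, 30, 45))
```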
The results indicate that our model, using simple rules to predict total duration, can generate speech with SIM and WER that are essentially comparable to those of the ground truth.\\n\\n| Method | SIM-O \\u2191 | WER \\u2193 |\\n|--------|----------|---------|\\n|| **LibriSpeech *test-clean*** ||\\n| *rule-based* | 0.686 | 2.976 |\\n| *duration predictor* | 0.687 | 2.634 |\\n| *gt length* | 0.697 | 2.012 |\\n|| **SeedTTS *test-en*** ||\\n| *rule-based* | 0.719 | 2.712 |\\n| *duration predictor* | 0.717 | 2.623 |\\n| *gt length* | 0.728 | 2.466 |\\n|| **SeedTTS *test-zh*** ||\\n| *rule-based* | 0.771 | 2.409 |\\n| *duration predictor* | 0.774 | 2.273 |\\n| *gt length* | 0.777 | 2.183 |\"}", "{\"comment\": \"First of all, we would like to express our gratitude to all the reviewers for their invaluable feedback, which has significantly contributed to enhancing the quality of our paper. Here, we summarize some points raised by multiple reviewers. For each reviewer, we address their specific concerns in the respective comments section.\\n\\n## The novelty of MaskGCT\\n\\nWe sincerely appreciate multiple reviewers' recognition of our engineering contributions. We have provided detailed model architectures and training specifications in the appendix, and we commit to open-sourcing all checkpoints and code to further benefit the research community.\\n\\nMaskGCT represents the first NAR TTS system trained on large-scale in-the-wild datasets without phone-level duration prediction. 
Notably, it achieves state-of-the-art performance across multiple evaluation metrics on various test sets, demonstrating superior capabilities in multilingual generation, accent handling, and emotional expression.\\n\\nIn addition, we also highlight that MaskGCT's novelty lies in two key aspects: (1) MaskGCT explores non-autoregressive TTS without requiring explicit phone-speech alignment supervision or phone-level duration prediction, which significantly simplifies the traditional TTS pipeline while achieving more natural and consistent generation results. (2) MaskGCT introduces VQ-based semantic tokens, which represents a distinct approach compared to previous methods that relied on k-means clustering.\\n\\n## The choice of semantic codec\\n\\nAll the reviewers have raised concerns about the choice of semantic codec (VQ vs. k-means). We believe a thorough investigation of this matter will significantly strengthen our paper. We will first present our empirical findings (which are already included in our paper), followed by additional experimental results. **All these analyses will be incorporated into the revised version of our paper.**\\n\\nIn our initial experiments, we observed that k-means-based semantic tokens were less effective in predicting acoustic tokens for languages with rich prosodic variations, particularly Chinese, where significant pitch variations were observed. We further support this finding with qualitative analysis.\\n\\n1. Since k-means can be seen as optimizing the reconstruction loss between input and reconstruction, we present reconstruction loss curves under different k-means and VQ configurations. We compare four configurations: VQ-8192 (which is the same as in our paper), VQ-2048, k-means-8192, and k-means-2048.\\nThe loss curves can be found at this [link](https://github.com/maskgct/maskgct/raw/refs/heads/main/recon_loss_for_vq.PNG).\\n\\n2. 
The information preservation in semantic tokens directly affects the semantic-to-acoustic model's prediction performance. We present the top-10 accuracy (shown in this [link](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/soundstorm_layer1_acc.PNG)) of the semantic-to-acoustic model in predicting the first layer of acoustic tokens. The results demonstrate that VQ-8192 outperforms VQ-2048, which in turn outperforms k-means-8192.\\n\\n3. We investigate the impact of different semantic representation approaches on acoustic token reconstruction. We train separate semantic-to-acoustic (S2A) models for each configuration and evaluate their performance through speech reconstruction metrics. Across all three test sets, the results reveal a consistent performance ranking in similarity (SIM) scores, with VQ-8192 yielding the highest performance, followed sequentially by VQ-2048, k-means-8192, and k-means-2048. For WER, VQ-based methods also demonstrate superior performance over k-means approaches, though this advantage is less pronounced on LibriSpeech test-clean. Notably, on SeedTTS test-zh (Chinese test set), k-means exhibits a substantial degradation in WER. 
We attribute this to the stronger coupling between Chinese semantics and prosodic features, where the transition from VQ to k-means results in significant loss of prosodic information in the semantic representations.\\n\\n| Semantic Codec | LibriSpeech *test-clean* | | SeedTTS *test-en* | | SeedTTS *test-zh* | |\\n|--------------|----------|---------|----------|---------|----------|---------|\\n| | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 |\\n| k-means 2048 | 0.648 | 3.013 | 0.658 | 3.989 | 0.691 | 11.420 |\\n| k-means 8192 | 0.661 | 2.862 | 0.664 | 3.012 | 0.713 | 8.782 |\\n| VQ 2048 | *0.671* | **2.177** | *0.692* | *3.187* | *0.744* | *4.913* |\\n| VQ 8192 | **0.680** | *2.223* | **0.713** | **2.175** | **0.763** | **2.088** |\\n\\nWe believe these additions will strengthen our argument and offer a clearer understanding of the impact of different semantic token approaches on model performance.\"}", "{\"comment\": \"Dear Reviewer, Once again, we would like to express our sincere gratitude for your valuable and constructive feedback. We would like to inquire whether our responses have further addressed your concerns. As the rebuttal phase is nearing its end, we would appreciate it if you could let us know if there are any further concerns or suggestions for improving our paper. We are more than willing to address any issues promptly. Thank you very much.\"}", "{\"title\": \"Reply to Reviewer EzBH (Part 1)\", \"comment\": \"First of all, we want to thank the reviewer for your careful reading and providing a lot of constructive comments! Below we address the concerns mentioned in the review.\\n\\n## Weaknesses\\n\\n``Weakness 1: The higher WER values in Tables 4 and 5 compared to previous reports need explanation.``\\n\\nWe find that one possible reason for the high WER could be that the ASR model has poor recognition capabilities for accents and emotional data. 
We discovered that the WER of the ground truth is also high. Thanks for your valuable suggestion; we will incorporate these discussions into the revised version of our paper.\\n\\n``Weakness 2: The choice of VQ-VAE over k-means for semantic tokenization needs experimental validation.``\\n\\nThank you for your valuable suggestion. This question has also been raised by other reviewers, and we believe a thorough investigation of this matter will significantly strengthen our paper. We will first present our empirical findings (which are already included in our paper), followed by additional experimental results. **All these analyses will be incorporated into the revised version of our paper.**\\n\\nIn our initial experiments, we observed that k-means-based semantic tokens were less effective in predicting acoustic tokens for languages with rich prosodic variations, particularly Chinese, where significant pitch variations were observed. We further support this finding with qualitative analysis.\\n\\n1. Since k-means can be seen as optimizing the reconstruction loss between input and reconstruction, we present reconstruction loss curves under different k-means and VQ configurations. We compare four configurations: VQ-8192 (which is the same as in our paper), VQ-2048, k-means-8192, and k-means-2048.\\nThe loss curves can be found at this [link](https://github.com/maskgct/maskgct/raw/refs/heads/main/recon_loss_for_vq.PNG).\\n\\n2. The information preservation in semantic tokens directly affects the semantic-to-acoustic model's prediction performance. We present the top-10 accuracy (shown in this [link](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/soundstorm_layer1_acc.PNG)) of the semantic-to-acoustic model in predicting the first layer of acoustic tokens. The results demonstrate that VQ-8192 outperforms VQ-2048, which in turn outperforms k-means-8192.\\n\\n3. 
We investigate the impact of different semantic representation approaches on acoustic token reconstruction. We train separate semantic-to-acoustic (S2A) models for each configuration and evaluate their performance through speech reconstruction metrics. Across all three test sets, the results reveal a consistent performance ranking in similarity (SIM) scores, with VQ-8192 yielding the highest performance, followed sequentially by VQ-2048, k-means-8192, and k-means-2048. For WER, VQ-based methods also demonstrate superior performance over k-means approaches, though this advantage is less pronounced on LibriSpeech test-clean. Notably, on SeedTTS test-zh (Chinese test set), k-means exhibits a substantial degradation in WER. We attribute this to the stronger coupling between Chinese semantics and prosodic features, where the transition from VQ to k-means results in significant loss of prosodic information in the semantic representations.\\n\\n| Semantic Codec | LibriSpeech *test-clean* | | SeedTTS *test-en* | | SeedTTS *test-zh* | |\\n|--------------|----------|---------|----------|---------|----------|---------|\\n| | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 |\\n| k-means 2048 | 0.648 | 3.013 | 0.658 | 3.989 | 0.691 | 11.420 |\\n| k-means 8192 | 0.661 | 2.862 | 0.664 | 3.012 | 0.713 | 8.782 |\\n| VQ 2048 | *0.671* | **2.177** | *0.692* | *3.187* | *0.744* | *4.913* |\\n| VQ 8192 | **0.680** | *2.223* | **0.713** | **2.175** | **0.763** | **2.088** |\\n\\n\\nThese comprehensive analyses demonstrate the advantages of VQ-based semantic tokens over k-means clustering, providing strong empirical evidence to support our design choices in MaskGCT.\"}", "{\"summary\": \"This paper introduces MaskGCT, a non-autoregressive (NAR) masked generative modeling method that sequentially generates audio from acoustic tokens, semantic tokens, and text. 
The semantic tokens are derived from VQ-VAE of W2V-BERT, a self-supervised speech learning model. The acoustic token approach extends DAC with improved training efficiency.\\n\\nResults demonstrate that MaskGCT either outperforms or matches current autoregressive (AR) based methods and NAR duration-based methods with publicly available implementation details, across multiple benchmarks: LibriSpeech, SeedTTS (Chinese and English), zero-shot TTS, and speech style (timbre, emotion, accent, etc.) imitation learning. Additionally, MaskGCT surpasses AR-based methods in both naturalness and computational efficiency (inference time steps).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper experimentally demonstrates that NAR (non-autoregressive) non-duration-based TTS methods outperform both AR (autoregressive) based and NAR duration-based TTS methods with publicly available implementation details.\", \"The open-source implementation and comprehensive reproducibility details provided in the appendix represent valuable contributions to the field.\", \"The paper is well-structured, with clear presentation and thorough technical descriptions.\"], \"weaknesses\": [\"Major Concerns:\", \"The novelty and technical contribution appear limited. The core methodology seems to be primarily a combination of existing work (MaskGIT and SpearTTS). The speech acoustic code is derived from DAC, with no apparent novel methodological contributions.\", \"Regarding NAR methods, the authors cite phone-level duration prediction as a source of complex model design. However, this claim needs clarification: How is this complexity measured (e.g., training time, loss balancing)? Why are AR-based methods considered less complex? Experience with NaturalSpeech3/Fastspeech2 suggests duration modeling is manageable and straightforward to implement. Does this conclusion apply to traditional small-scale TTS models? 
The statement appears overly generalized.\", \"The Related Work section overlooks prominent flow-based methods like VoiceBox and p-flow, which also employ masked generative models. Their exclusion from this category requires explanation.\", \"Table 1 would be more appropriate in the appendix. Multi-task capability alone is not a significant contribution, as multi-task frameworks are common. The focus should be on concrete improvements in naturalness, prosody, efficiency, or reduced complexity. Placing Table 1 prominently suggests multi-task functionality is the main contribution, which diminishes the paper's actual value.\", \"In line 210, the claim that k-means based discrete SSL units lead to information loss needs substantiation. Why is VQ-VAE of W2V-BERT considered superior for semantic preservation despite also being discrete SSL units? While Appendix B provides semantic and acoustic comparisons, direct evidence supporting this specific claim is lacking.\"], \"minor_points\": [\"Figure 2 would benefit from additional notations for clarity. Also, what is the difference between prompt semantic tokens and target semantic tokens? 
Can you give an example?\", \"While following the text-semantics-acoustics pipeline established by papers like SpearTTS, the superiority of this approach over alternatives (e.g., direct text2acoustic without semantics) should be justified.\", \"Section 4.2.3's purpose and motivation need clarification, particularly if addressing speed modification, as specialized tools exist for this purpose.\", \"Consider whether inference steps alone adequately measure efficiency, or if other metrics common in streaming methods (e.g., latency) would provide more comprehensive evaluation.\\\"\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer AvMh (Part 3)\", \"comment\": \"``Q3: Is the output length of the text-to-semantic stage equal to that from the semantic-to-acoustic stage? From Figure 2, the former seems shorter than the latter?``\\n\\nYes, they are the same. More precisely, the total length of the prompt semantic tokens and target semantic tokens is consistent with the total length of the prompt acoustic tokens and target acoustic tokens.\\n\\n``Q4: The terms \\\"coarse-grained\\\" and \\\"fine-grained\\\" do not seem to be mentioned in the SpearTTS paper, what do the authors mean by them?``\\n\\nIn our paper, \\\"coarse-grained\\\" refers to the first layer of acoustic tokens, while \\\"fine-grained\\\" refers to the remaining layers. This terminology aligns with AudioLM [2], which SpearTTS extends. AudioLM explicitly uses the terms \\\"Coarse acoustic modeling\\\" and \\\"Fine acoustic modeling\\\" in its framework.\\nTo clarify the connection: SpearTTS employs an autoregressive (AR) model to predict semantic codes from text, followed by another AR model for acoustic code prediction. 
As stated in the SpearTTS paper: \\\"Similar to AudioLM, it is possible to add an optional third stage, with the goal of improving the quality of the synthesized speech by predicting acoustic tokens corresponding to fine residual vector quantization levels.\\\"\\n\\n``Q5: What's the length of prompt audio? Does it affect the quality of the generated speech?``\\n\\nOur model offers considerable flexibility in prompt handling:\\n\\n1. Training Strategy: We randomly select 0-50% of the input speech as the prompt during training, enabling the model to adapt to varying prompt lengths.\\n\\n2. Inference Guidelines: Since our training data consists of samples under 30 seconds, we recommend keeping prompt lengths below 20 seconds for optimal performance in practical applications.\\n\\nThanks again for your constructive comments. We would be grateful if we could hear your feedback regarding our answers to the reviews. We would be happy to answer and discuss if you have further comments.\\n\\n[1] Lee K, Kim D W, Kim J, et al. DiTTo-TTS: Efficient and Scalable Zero-Shot Text-to-Speech with Diffusion Transformer[J]. arXiv preprint arXiv:2406.11427, 2024.\\n\\n[2] Borsos Z, Marinier R, Vincent D, et al. Audiolm: a language modeling approach to audio generation[J]. IEEE/ACM transactions on audio, speech, and language processing, 2023, 31: 2523-2533.\"}", "{\"title\": \"Reply to Reviewer AvMh (Part 2)\", \"comment\": \"``Weakness 3: The claim about VQ-VAE's superiority over k-means clustering for semantic representation needs experimental validation.``\\n\\nThank you for your valuable suggestion. This question has also been raised by other reviewers, and we believe a thorough investigation of this matter will significantly strengthen our paper. We will first present our empirical findings (which are already included in our paper), followed by additional experimental results. 
**All these analyses will be incorporated into the revised version of our paper.**\\n\\nIn our initial experiments, we observed that k-means-based semantic tokens were less effective in predicting acoustic tokens for languages with rich prosodic variations, particularly Chinese, where significant pitch variations were observed. We further support this finding with qualitative analysis.\\n\\n1. Since k-means can be seen as optimizing the reconstruction loss between input and reconstruction, we present reconstruction loss curves under different k-means and VQ configurations. We compare four configurations: VQ-8192 (which is the same as in our paper), VQ-2048, k-means-8192, and k-means-2048.\\nThe loss curves can be found at this [link](https://github.com/maskgct/maskgct/raw/refs/heads/main/recon_loss_for_vq.PNG).\\n\\n2. The information preservation in semantic tokens directly affects the semantic-to-acoustic model's prediction performance. We present the top-10 accuracy (shown in this [link](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/soundstorm_layer1_acc.PNG)) of the semantic-to-acoustic model in predicting the first layer of acoustic tokens. The results demonstrate that VQ-8192 outperforms VQ-2048, which in turn outperforms k-means-8192.\\n\\n3. We investigate the impact of different semantic representation approaches on acoustic token reconstruction. We train separate semantic-to-acoustic (S2A) models for each configuration and evaluate their performance through speech reconstruction metrics. Across all three test sets, the results reveal a consistent performance ranking in similarity (SIM) scores, with VQ-8192 yielding the highest performance, followed sequentially by VQ-2048, k-means-8192, and k-means-2048. For WER, VQ-based methods also demonstrate superior performance over k-means approaches, though this advantage is less pronounced on LibriSpeech test-clean. 
Notably, on SeedTTS test-zh (Chinese test set), k-means exhibits a substantial degradation in WER. We attribute this to the stronger coupling between Chinese semantics and prosodic features, where the transition from VQ to k-means results in significant loss of prosodic information in the semantic representations.\\n\\n| Semantic Codec | LibriSpeech *test-clean* | | SeedTTS *test-en* | | SeedTTS *test-zh* | |\\n|--------------|----------|---------|----------|---------|----------|---------|\\n| | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 |\\n| k-means 2048 | 0.648 | 3.013 | 0.658 | 3.989 | 0.691 | 11.420 |\\n| k-means 8192 | 0.661 | 2.862 | 0.664 | 3.012 | 0.713 | 8.782 |\\n| VQ 2048 | *0.671* | **2.177** | *0.692* | *3.187* | *0.744* | *4.913* |\\n| VQ 8192 | **0.680** | *2.223* | **0.713** | **2.175** | **0.763** | **2.088** |\\n\\nWe believe these additions will strengthen our argument and offer a clearer understanding of the impact of different semantic token approaches on model performance.\\n\\n## Questions\\n\\n``Q1: In Table 1, does \\\"ZS TTS\\\" denote \\\"zero-shot TTS\\\", and what's \\\"CL TTS\\\"? For \\\"Imp. Dur.\\\", I also don't fully agree that this work implicitly models duration, as it requires a specified length of the target generated speech.``\\n\\nYes, \\\"ZS TTS\\\" denotes \\\"zero-shot\\\", \\\"CL TTS\\\" denotes cross-lingual TTS. For the latter part of the question, we are primarily referring to the absence of explicit phone-level duration modeling. We appreciate the feedback on the unclear definitions. 
We will add more detailed explanations in the revised paper.\\n\\n``Q2: In Section 3.2.3, why \\\"the number of frames in the semantic token sequence is equal to the sum of the frames in the prompt acoustic sequence and the target acoustic sequence\\\"?``\\n\\nThis is because our semantic tokens and acoustic tokens share the same 50 Hz frame rate; the acoustic tokens simply consist of multiple layers, each at that frame rate. More accurately, the semantic token sequence input to the semantic-to-acoustic model is a concatenation of the semantic tokens corresponding to the prompt speech and the target speech.\\n\\nIt's worth noting that semantic and acoustic tokens don't necessarily need to share the same frame rate, as long as they maintain a fixed ratio of tokens generated per unit time.\"}", "{\"comment\": \"Thank you again for your great efforts and the valuable comments.\\n\\nWe have carefully addressed the main concerns in detail. We hope you might find the response satisfactory. As the paper discussion phase is coming to an end, we would be grateful if we could hear your feedback regarding our answers to the reviews. 
We will be very happy to clarify any remaining points (if any).\"}", "{\"comment\": \"Thanks again for your valuable and constructive feedback. Since the rebuttal phase is nearing its end, we would appreciate it if you could let us know whether our further reply has addressed your concern. If you have any further suggestions or concerns, we are more than willing to provide more discussion as soon as possible.\"}", "{\"summary\": \"This paper proposes a masked generative codec transformer, MaskGCT, which performs speech synthesis using a mask-predict approach. MaskGCT generates semantic tokens from text input and is composed of two components: text-to-semantic MaskGCT, which generates semantic tokens from text, and semantic-to-acoustic MaskGCT, which generates acoustic tokens from semantic tokens. Each MaskGCT operates by filling masked parts based on confidence scores predicted in stages, from a sequence where all discrete speech tokens are masked at the start. Instead of using the commonly used method of obtaining semantic tokens through k-means clustering applied to pre-trained SSL models, MaskGCT employs a separate VQ-VAE model on SSL features to generate semantic tokens, reducing information loss through reconstruction loss. MaskGCT uses a similar mask prediction method for predicting discrete speech tokens as Soundstorm, which is based on MaskGIT. However, unlike Soundstorm, MaskGCT approaches the semantic token generation from text through masked token modeling and includes an additional module for total length prediction to align sequence lengths. MaskGCT demonstrates superior speaker similarity in widely used zero-shot TTS evaluation compared to various zero-shot TTS models. 
Moreover, a demo page with samples shows the versatility of MaskGCT across various tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper shows that a mask-predict approach on discrete tokens can achieve a high level of speaker similarity.\", \"Samples on the demo page show that MaskGCT enables zero-shot TTS that effectively captures not only timbre but also emotion and style across a variety of voices, with excellent sample quality.\", \"To assess how well the model captures accent and emotion, the paper introduces metrics such as Accent SIM and Emotion SIM, which utilize representations reflecting each attribute. Their results indicate that MaskGCT performs better in imitating accent and emotion than other approaches based on these metrics.\"], \"weaknesses\": [\"In Soundstorm, the mask-predict approach is introduced to acoustic token generation. It seems that the methodological novelty in MaskGCT lies mainly in extending this mask-predict approach to semantic token generation.\", \"The paper utilizes VQ-VAE to reduce information loss in semantic tokens compared to the k-means clustering approach; however, it does not experimentally demonstrate how this approach improves over k-means clustering.\", \"Additionally, while using a larger number of semantic tokens may yield good performance in zero-shot TTS, the semantic tokens do not seem disentangled from speaker information, making MaskGCT less suitable for voice conversion tasks. 
The voice conversion samples on the demo page also appear less similar to the reference speaker.\", \"When comparing single-sample generation, MaskGCT's pronunciation accuracy appears lower than that of VoiceBox or NaturalSpeech 3.\", \"Regarding the seed-TTS evaluation method, although it is not explicitly shown as a baseline in the paper, the objective metrics appear to be worse than those of seed-TTS.\"], \"questions\": [\"Comments\", \"It would be helpful to mention the frame rate of semantic and acoustic tokens.\", \"I am curious whether only a single sampling step is used from the third layer during acoustic token generation. Is there any difference when using more sampling steps?\", \"It would be beneficial to also present the overall sampling speed of the model, including the Real-Time Factor.\", \"In Table 1, it seems that VoiceBox's Imp. Dur. should be marked as \\\"X.\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for your feedback. We would like to know if there are any further questions or issues we can address. We are committed to engaging fully and will reply as promptly as possible. If you have any remaining concerns or suggestions to help further enhance this research, please feel free to comment, and we are more than willing to continue addressing these issues.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks again for your valuable comments, we would be grateful if we could hear your feedback regarding our answers. We would be happy to answer and discuss if you have further comments.\"}", "{\"comment\": \"**VQ semantics vs Kmeans semantics**\\n\\nFor the third point, in our paper, our main focus is on how semantic tokens obtained through better discretization methods can better aid in predicting acoustic tokens to reconstruct high-quality speech waveforms. 
The reconstruction loss of different discretization methods (VQ vs. K-means) supports our claim that \\\"This approach minimizes the information loss of semantic features even with a single codebook.\\\" Moreover, semantic tokens based on VQ actually lead to better speech similarity and WER, which are reasonable metrics for evaluating zero-shot TTS. As for the description of semantic features, we discuss this in the appendix: \\\"In this paper, we refer to the speech representation extracted from the speech self-supervised learning (SSL) model as the semantic feature. The discrete tokens obtained through the discretization of these semantic features (using k-means or vector quantization) are termed semantic tokens. Similarly, we define the representations from melspectrogram, neural speech codecs, or speech VAE as acoustic features, and their discrete counterparts are called acoustic tokens. This terminology was first introduced in [68] and has since been adopted by many subsequent works [8, 19, 39, 69, 70]. It is important to note that this is not a strictly rigorous definition. Generally, we consider semantic features or tokens to contain more prominent linguistic information and exhibit stronger correlations with phonemes or text. One measure of this is the phonetic discriminability in terms of the ABX error rate. In this paper, the W2v-BERT 2.0 features we use have a phonetic discriminability within less than 5 on the LibriSpeech dev-clean dataset, whereas acoustic features, for example, Encodec latent features, score above 20 on this metric. However, it is worth noting that semantic features or tokens not only contain semantic information but also include prosodic and timbre aspects. In fact, we suggest that for certain two-stage zero-shot TTS systems, excessive loss of information in semantic tokens can degrade the performance of the second stage, where semantic-to-acoustic conversion occurs. 
Therefore, finding a speech representation that is more suitable for speech generation remains a challenging problem.\\\"\\n\\n[8] Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models. arXiv preprint arXiv:2403.03100, 2024.\\n[19] Soundstorm: Efficient parallel audio generation. arXiv preprint arXiv:2305.09636, 2023.\\n[39] Repcodec: A speech representation codec for speech tokenization. arXiv preprint arXiv:2309.00169, 2023.\\n[68] Audiolm: a language modeling approach to audio generation. IEEE/ACM transactions on audio, speech, and language processing, 31:2523\\u20132533, 2023.\\n[69] Fireredtts: A foundation text-to-speech framework for industry-level generative speech applications. arXiv preprint arXiv:2409.03283, 2024.\\n[70] Speechtokenizer: Unified speech tokenizer for speech large language models. arXiv preprint arXiv:2308.16692, 2023.\"}", "{\"metareview\": \"The paper introduces MaskGCT, a two-stage NAR TTS system that predicts semantic tokens from text and then acoustic tokens from semantic tokens using a masked generative transformer. MaskGCT demonstrates superior speaker similarity in widely used zero-shot TTS evaluation compared to various zero-shot TTS models. The main contribution of this paper is to show that it is possible to train a NAR TTS model with a simple pipeline without requiring explicit phone-speech alignment.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers (QWox, AvMh, EzBH) maintained their scores around the acceptance threshold (6), acknowledging the authors\\u2019 rebuttals addressed several points and improved clarity. Reviewer n456 remained unconvinced on novelty, keeping a low score (3).\\n\\nAll reviewers raised concerns about the novelty of the work. Reviewer QWox noted that the mask-predict approach is applied to acoustic token generation, with the primary methodological novelty of MaskGCT being its extension of this approach to semantic token generation. 
Reviewer n456 commented that the novelty and technical contributions seem limited, as the core methodology largely combines existing work, specifically MaskGIT and SPEAR-TTS. Reviewer AvMh remarked that this work aligns with SPEAR-TTS and pointed out that the SoundStorm paper also explores a similar strategy. Furthermore, the application of a non-autoregressive masked generative transformer for codec-based TTS has already been presented in SoundStorm and NaturalSpeech3. Reviewer EzBH highlighted that using text as a condition without duration expansion for a NAR model was proposed in e2-TTS.\", \"author_reply\": \"The authors show that the VQ-VAE approach is superior to the k-means approach from the TTS perspective.\", \"note\": \"The author's reply addresses the issue.\\n\\nAC's opinion:\\n\\nThe main consideration for whether to accept this paper lies in its novelty, which can be subjective. This paper can be seen as an integration of SPEAR-TTS (existing two-stage TTS methods) and MaskGIT (from computer vision). Although MaskGIT itself is not novel, I was genuinely surprised to discover that TTS can be trained using MaskGIT without requiring text and speech alignment. From this perspective, I am inclined to accept this paper.\\n\\nHowever, a very recent work, E2-TTS, has also demonstrated that non-autoregressive TTS can be achieved without text and speech alignment. This paper can be regarded as a two-stage implementation of E2-TTS (although they have different diffusion mechanisms). With E2-TTS in mind, the findings presented in this paper are not as surprising. Additionally, this paper does not include a comparison with E2-TTS. But, E2-TTS was published at the end of June. To the best of my knowledge, papers published after July are considered concurrent work according to ICLR's review guidelines, placing E2-TTS in a grey area.\\n\\nIn the end, I recommend this paper for acceptance. 
However, I would not object if the SAC decides to reject it.\"}", "{\"title\": \"Reply to Reviewer n456 (Part 1)\", \"comment\": \"We sincerely thank the reviewer for their thorough review and valuable constructive feedback. We are also grateful for your recognition of our work's impact on the research community. We address each of your concerns in detail below.\\n\\n## Weaknesses\\n\\n``Weakness 1: Limited novelty as the method mainly combines existing works (MaskGIT, SpearTTS) with no apparent novel contributions in acoustic modeling (using DAC).``\\n\\nWe would like to highlight MaskGCT's significant contributions at both representation and modeling levels: (1) At the modeling level, MaskGCT innovatively applies masked generative modeling to both text-to-semantic and semantic-to-acoustic generation. While this approach was originally developed for image generation (MaskGIT), its adaptation to TTS presents unique challenges and opportunities. Notably, MaskGCT achieves non-autoregressive TTS without requiring explicit phone-speech alignment supervision or phone-level duration prediction, which significantly simplifies the traditional TTS pipeline while achieving more natural and consistent generation results.\\n(2) At the representation level, MaskGCT introduces VQ-based semantic tokens, which represents a distinct approach compared to previous methods that relied on k-means clustering.\\n\\n``Weakness 2: The claim about NAR methods being complex due to duration prediction needs better justification, as duration modeling in models like NaturalSpeech3/FastSpeech2 is relatively straightforward.``\", \"we_would_like_to_clarify_that_the_complexity_of_phone_level_duration_prediction_arises_from_several_key_aspects\": \"1. Implementation Complexity: Phone-level duration prediction requires pre-extracted phone-level durations as supervision and additional modeling modules, which becomes more challenging in zero-shot scenarios and with in-the-wild or noisy data. 
For instance, NaturalSpeech 3 employs a Conformer-based duration predictor with discrete diffusion, while [1] relies on diffusion models.\\n\\n2. Computational Overhead: The lack of efficient tools for phone-level duration extraction makes it computationally intensive, especially when processing large-scale datasets.\\n\\n3. Accuracy Limitations: Phone-level duration extraction from in-the-wild data often lacks accuracy, potentially compromising model performance. Evidence from [2] shows that VoiceBox's reproduction on the Emilia dataset, which depends on phone-level duration prediction, achieved lower naturalness and similarity scores compared to autoregressive models that avoid such prediction.\\n\\n4. Scalability Considerations: Our approach aims to reduce dependence on such priors while leveraging more powerful generative models and larger datasets to enhance generation quality. We acknowledge this may not fully apply to traditional small-data TTS scenarios, where phone-level duration prediction might be necessary for model convergence, as demonstrated in flow-matching-based works [3,4]. To validate our approach, we conducted additional experiments training T2S models on smaller datasets (LibriTTS and a 1K-hour Emilia subset) while maintaining the self-supervised S2A model. 
Results demonstrate robust performance even with limited training data, which we attribute to the effectiveness of semantic token prediction and the strong modeling capabilities of masked generative approaches.\\n\\n| Model | SIM-O \\u2191 | WER \\u2193 |\\n|-------|----------|---------|\\n|| **SeedTTS *test-en*** ||\\n| MaskGCT (LibriTTS 0.58K hours) | 0.677 | 3.043 |\\n| MaskGCT (Emilia 1K hours) | 0.696 | 3.378 |\\n| MaskGCT (Emilia 100K hours) | 0.728 | 2.466 |\\n|| **SeedTTS *test-zh*** ||\\n| MaskGCT (Emilia 1K hours) | 0.754 | 3.012 |\\n| MaskGCT (Emilia 100K hours) | 0.777 | 2.183 |\\n\\n``Weakness 3: The Related Work section omits important flow-based methods (VoiceBox, p-flow) that use masked generative models.``\\n\\nWe would like to note that VoiceBox was introduced in the first paragraph of our Related Work section and served as a baseline in our experimental comparisons. We acknowledge the oversight in not including the p-flow paper and will incorporate it in the revised version.\\n\\nAdditionally, we want to clarify some potential misconception. VoiceBox is a flow matching-based TTS system, and its mask is designed to serve as a prompt during training, which is different from the \\\"mask\\\" in masked generative models. We believe the \\\"mask\\\" in masked generative models is more akin to the noise in diffusion models, as masked generative models have a discrete diffusion perspective.\\n\\n``Weakness 4: Table 1's prominence suggests multi-task capability as the main contribution, while the focus should be on concrete improvements in performance and efficiency.``\\n\\nThank you for your suggestions! We will reorganize it in the revised version of the paper.\"}", "{\"comment\": [\"Thank you very much for your valuable feedback!\", \"I agree that both SoundStorm and NaturalSpeech 3 are codec-based TTS systems. In SoundStorm, it uses an AR (Autoregressive) model as the first stage and an NAR (Non-Autoregressive) model as the second stage. 
Robustness and inference speed are known issues in AR models. Unlike AR-based models, NaturalSpeech 3 explicitly employs a phone duration predictor and then obtains the frame-level phoneme as a condition to predict the frame-level prosody tokens, followed by predicting the frame-level content tokens and acoustic tokens. This means it requires pre-extracted phone duration for model training, and the performance of phone duration prediction impacts the overall prosody of the generated speech. Unlike SoundStorm and NaturalSpeech 3, MaskGCT does not use phone-level duration. We only need phone-level conditioning to predict frame-level semantic tokens, or directly use text tokens obtained from BPE (we have shown the results in Appendix A.7, which indicate that using BPE does not cause significant performance loss and may even yield better WER for Chinese). Additionally, MaskGCT also explores speech semantic tokens based on VQ. In our supplementary experiments, we also investigate the differences between a two-stage model and direct text-to-acoustic modeling (the results are shown in Appendix K).\", \"We also illustrate the differences between MaskGCT and traditional NAR (Non-Autoregressive) methods in text-speech alignment modeling through a figure (in this [https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/aatn_map.png](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/aatn_map.png)). For traditional NAR TTS systems, a duration predictor and length regulator are first used to obtain frame-level conditions that provide alignment information. 
For MaskGCT, similar to AR (Autoregressive) methods, we concatenate the text in front of the speech tokens through in-context learning, and the model implicitly learns speech-text alignment through self-attention.\", \"In addition, we provide attention maps at different stages of inference (steps 1, 10, 20) and across various layers of the model (layer 1, layer 6, layer 16) in this [https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/ar_vs_nar_vs_maskgct.png](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/ar_vs_nar_vs_maskgct.png). These attention maps demonstrate that our model implicitly learns the alignment between text and speech. They show that the model handles the alignment between speech tokens and text in the middle layers of the network, while in the deeper layers, the model has already resolved the alignment and focuses on processing the tokens.\", \"For the second concern, we appreciate the highly constructive comments, and we will incorporate the results with the rule-based duration into Table 1, as we agree that this can better substantiate our claim that text-speech alignment supervision or phone-level duration prediction is unnecessary. We further emphasize and discuss this point in Appendix A.5. Additionally, the absence of phone-level duration means that we can simply adjust the total duration to control the tempo of the generated speech. In Section 4.2.3 Duration Length Analysis, we demonstrate that MaskGCT exhibits robustness across gt duration lengths ranging from gt len * 0.7 to gt len * 1.3, and at the demo page \\\"Speech Tempo Controllability\\\", we showcase samples of the produced speech across varying total durations (gt len * 0.6 to gt len * 1.2), all of which exhibit good naturalness.\", \"Thanks again for your valuable comments. We have made every effort to address your concerns and enhance our paper, and we kindly hope you can reconsider our paper. 
If you have any further suggestions or concerns, we are more than willing to provide more discussion.\"]}", "{\"title\": \"Reply to Reviewer QWox (Part 1)\", \"comment\": \"First of all, we want to thank the reviewer for your careful reading and providing a lot of constructive comments! Below we address the concerns mentioned in the review.\\n\\n## Weaknesses\\n\\n``Weakness 1: The main novelty appears to be extending Soundstorm's mask-predict approach from acoustic to semantic token generation.``\\n\\nMaskGCT employs masked generative modeling for both text-to-semantic and semantic-to-acoustic generation. Beyond this, we would like to highlight that MaskGCT's novelty lies in two key aspects: (1) MaskGCT explores non-autoregressive TTS without requiring explicit phone-speech alignment supervision or phone-level duration prediction, which significantly simplifies the traditional TTS pipeline while achieving more natural and consistent generation results. (2) MaskGCT introduces VQ-based semantic tokens, which represents a distinct approach compared to previous methods that relied on k-means clustering.\\n\\n``Weakness 2: The paper lacks experimental comparison between the proposed VQ-VAE and k-means clustering for semantic tokens``\\n\\nThank you for your valuable suggestion. This question has also been raised by other reviewers, and we believe a thorough investigation of this matter will significantly strengthen our paper. We will first present our empirical findings (which are already included in our paper), followed by additional experimental results. **All these analyses will be incorporated into the revised version of our paper.**\\n\\nIn our initial experiments, we observed that k-means-based semantic tokens were less effective in predicting acoustic tokens for languages with rich prosodic variations, particularly Chinese, where significant pitch variations were observed. We further support this finding with qualitative analysis.\\n\\n1. 
Since k-means can be seen as optimizing the reconstruction loss between input and reconstruction, we present reconstruction loss curves under different k-means and VQ configurations.\", \"we_compare_four_configurations\": \"VQ-8192 (which is the same as in our paper), VQ-2048, k-means-8192, and k-means-2048.\\nThe loss curves can be found at this [link](https://github.com/maskgct/maskgct/raw/refs/heads/main/recon_loss_for_vq.PNG).\\n\\n2. The information preservation in semantic tokens directly affects the semantic-to-acoustic model's prediction performance. We present the top-10 accuracy (shown in this [link](https://raw.githubusercontent.com/maskgct/maskgct/refs/heads/main/soundstorm_layer1_acc.PNG)) of the semantic-to-acoustic model in predicting the first layer of acoustic tokens. The results demonstrate that VQ-8192 outperforms VQ-2048, which in turn outperforms k-means-8192.\\n\\n3. We investigate the impact of different semantic representation approaches on acoustic token reconstruction. We train separate semantic-to-acoustic (S2A) models for each configuration and evaluate their performance through speech reconstruction metrics. Across all three test sets, the results reveal a consistent performance ranking in similarity (SIM) scores, with VQ-8192 yielding the highest performance, followed sequentially by VQ-2048, k-means-8192, and k-means-2048. For WER, VQ-based methods also demonstrate superior performance over k-means approaches, though this advantage is less pronounced on LibriSpeech test-clean. Notably, on SeedTTS test-zh (Chinese test set), k-means exhibits a substantial degradation in WER. 
We attribute this to the stronger coupling between Chinese semantics and prosodic features, where the transition from VQ to k-means results in significant loss of prosodic information in the semantic representations.\\n\\n| Semantic Codec | LibriSpeech *test-clean* | | SeedTTS *test-en* | | SeedTTS *test-zh* | |\\n|--------------|----------|---------|----------|---------|----------|---------|\\n| | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 | SIM-O \\u2191 | WER \\u2193 |\\n| k-means 2048 | 0.648 | 3.013 | 0.658 | 3.989 | 0.691 | 11.420 |\\n| k-means 8192 | 0.661 | 2.862 | 0.664 | 3.012 | 0.713 | 8.782 |\\n| VQ 2048 | *0.671* | **2.177** | *0.692* | *3.187* | *0.744* | *4.913* |\\n| VQ 8192 | **0.680** | *2.223* | **0.713** | **2.175** | **0.763** | **2.088** |\\n\\nThese comprehensive analyses demonstrate the advantages of VQ-based semantic tokens over k-means clustering, providing strong empirical evidence to support our design choices in MaskGCT.\"}", "{\"title\": \"Thanks for the detailed response\", \"comment\": \"Thanks for the detailed reply!\\n\\nI think the supplemented experiments comparing different methods for quantizing the semantic representation ( VQ-VAE vs. k-means clustering) have addressed my concern. \\n\\nHowever, I still have two main concerns:\\n\\n- There are already codec-based TTS works, such as SoundStorm and NaturalSpeech3, that apply non-autoregressive masked generative modeling. Therefore, I still have concerns regarding the novelty of this aspect.\\n- The authors have supplemented experiments using a rule-based method to determine the total duration length for inference, which I agree does not require text-speech alignment supervision or phone-level duration prediction in training. However, the results in Table 1 comparing the proposed system with other state-of-the-art models do not present this method. 
I think this is important, as the authors emphasize a key contribution of this work in abstract: \\\"eliminates the need for explicit alignment information between text and speech supervision, as well as phone-level duration prediction\\\". \\n\\nPlease let me know if I have misunderstood.\"}", "{\"title\": \"Reply to Reviewer QWox (Part 3)\", \"comment\": \"``Q4: In Table 1, it seems that VoiceBox's Imp. Dur. should be marked as \\\"X.\\\"``\\n\\nThank you for the reminder! We will fix it in the revised version.\\n\\nThanks again for your constructive comments. We would be grateful if we could hear your feedback regarding our answers to the reviews. We would be happy to answer and discuss if you have further comments.\\n\\n[1] Lee S H, Choi H Y, Kim S B, et al. Hierspeech++: Bridging the gap between semantic and acoustic representation of speech by hierarchical variational inference for zero-shot speech synthesis[J]. arXiv preprint arXiv:2311.12454, 2023.\\n\\n[2] Wang Z, Chen Y, Xie L, et al. Lm-vc: Zero-shot voice conversion via speech generation based on language models[J]. IEEE Signal Processing Letters, 2023.\\n\\n[3] Yang D, Tian J, Tan X, et al. Uniaudio: An audio foundation model toward universal audio generation[J]. arXiv preprint arXiv:2310.00704, 2023.\"}" ] }
ExrEw8cVlU
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
[ "Jiahao Lu", "Yifan Zhang", "Qiuhong Shen", "Xinchao Wang", "Shuicheng YAN" ]
3D Gaussian splatting (3DGS), known for its groundbreaking performance and efficiency, has become a dominant 3D representation and brought progress to many 3D vision tasks. However, in this work, we reveal a significant security vulnerability that has been largely overlooked in 3DGS: the computation cost of training 3DGS could be maliciously tampered with by poisoning the input data. By developing an attack named Poison-splat, we reveal a novel attack surface where the adversary can poison the input images to drastically increase the computational memory and time needed for 3DGS training, pushing the algorithm towards its worst-case computational complexity. In extreme cases, the attack can even consume all allocable memory, leading to a Denial-of-Service (DoS) that disrupts servers, resulting in practical damage to real-world 3DGS service vendors. Such a computation cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies: attack objective approximation, proxy model rendering, and optional constrained optimization. These strategies not only ensure the effectiveness of our attack but also make it difficult to defend with simple defensive measures. We hope the revelation of this novel attack surface can spark attention to this crucial yet overlooked vulnerability of 3DGS systems. Our code is available at https://github.com/jiahaolu97/poison-splat .
[ "gaussian splatting", "computation cost attack", "energy-latency attack", "data poisoning attack", "3D security", "AI security" ]
Accept (Spotlight)
https://openreview.net/pdf?id=ExrEw8cVlU
https://openreview.net/forum?id=ExrEw8cVlU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zjU0Bpnfk5", "wYZ4MQ0btM", "wG4rUA4vZQ", "sobyO0rcxF", "pYLhW9v6Fz", "oc0yfp4m8E", "nrcxqGTfBu", "mzQINJj2un", "mYXPQ9G5LN", "mXoIdTIbH4", "lsznebQBkT", "hmqT00yueQ", "gXobBz2aFS", "bvkIbB0Vb4", "b73C873x4o", "b4LWb1oq3Y", "aTtroaY3bo", "YwU29O5gvL", "XS45xKBJTd", "XHUxVgChh8", "O2QPfJXdms", "JG9LUrh06m", "Hdy3YarMG2", "F8l5qOAgBj", "CAHbMaVU5U", "4KgMvC9Hco", "0vgpndXRMT", "0vERBeukT7" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732610730154, 1732631122051, 1732297620946, 1732294154481, 1732296144067, 1732716580557, 1732297812521, 1732635899316, 1732295209330, 1732296528346, 1732293400962, 1732582551604, 1732297253692, 1730662375186, 1732717937537, 1732297466400, 1730623737816, 1732609709910, 1732295883218, 1732295258981, 1732295747400, 1734618337391, 1730169704675, 1730595355608, 1737523427216, 1732582523232, 1732582466127, 1732582486528 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_ybEK" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_THFm" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_SqKM" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_ybEK" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_SqKM" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Area_Chair_NVSV" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_THFm" ], [ "ICLR.cc/2025/Conference/Submission968/Reviewer_gaGQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ], [ "ICLR.cc/2025/Conference/Submission968/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely appreciate your constructive and detailed feedback, which has greatly contributed to improving the quality of our paper. Your recognition of our efforts is highly encouraging, and we are delighted to have addressed your concerns to your satisfaction. Thank you once again for your valuable suggestions and thoughtful review.\"}", "{\"comment\": \"Thanks for your detailed response. All my concerns have been successfully resolved and I have raised my rating. Please update the additional results and discussions to your paper. I believe they could make the work more comprehensive.\"}", "{\"title\": \"Response to Reviewer THFm (Split 4 of 5)\", \"comment\": \"**Q5. Could authors add more common models to test effectiveness of black-box and white-box attacks?**\\n>`Q5. Could the authors add more common models to compare the effectiveness of black-box and white-box attacks?`\\n\\nSure. 
In the initial paper, we have provided white-box attacks for 3D Gaussian Splatting (c.f. Section 4.1, Section D.1), as well as black-box attacks for Scaffold-GS (c.f. Section 4.2, Section D.2).\\n\\nTo further test the effectiveness of our approach, we propose conducting experiments on Mip-Splatting [1], which received the best student paper award at CVPR 2024 and has been highly popular (with 178 citations as of November 20th, 2024). Mip-Splatting is an advanced variant of the original 3D Gaussian Splatting, incorporating a 3D smoothing filter and a 2D Mip filter. These enhancements help eliminate various artifacts and achieve alias-free renderings. Partial results are shown in the following table. \\n\\n*Constrained Black-box attack against Mip-Splatting*\\n\\n| Scene | Number of Gaussians (poisoned) | GPU Memory (poisoned) | Training Time (poisoned) |\\n|:--:|:--:|:--:|:--:|\\n| eps16-NS-chair | 1.088 M (2.92x) | 33042 MB (5.23x) | 13.75 min (1.75x) |\\n| eps16-NS-hotdog | 1.437 M (7.18x) | 58878 MB (11.01x) | 20.08 min (2.84x) |\\n| eps16-NS-lego | 1.127 M (3.67x) | 48682 MB (7.27x) | 16.43 min (2.26x) |\\n| eps16-NS-ship | 1.793 M (5.43x) | 61026 MB (9.39x) | 21.05 min (2.41x) |\\n| eps16-MIP-bicycle | **DoS** | **DoS** | **DoS** |\\n| eps16-MIP-bonsai | 13.016 M (7.79x) | 80826 MB (2.62x) | 61.82 min (2.61x) |\\n| eps16-MIP-counter | 8.329 M (5.58x) | 79904 MB (4.10x) | 56.75 min (2.16x) |\\n| eps16-MIP-room | 12.949 M (6.23x) | 81130 MB (2.57x) | 77.83 min (2.87x) |\\n\\n\\n*Unconstrained Black-box Poison-splat attack against Mip-Splatting*\\n\\n| Scene | Number of Gaussians (poisoned) | GPU Memory (poisoned) | Training Time (poisoned) |\\n|:--:|:--:|:--:|:--:|\\n| unconstrained-NS-chair | 6.106 M (16.41x) | 80732 MB (12.78x) | 57.73 min (7.35x) |\\n| unconstrained-NS-hotdog | 6.876 M (34.38x) | 80630 MB (15.07x) | 59.80 min (8.45x) |\\n| unconstrained-NS-lego | 6.472 M (21.08x) | 80668 MB (12.05x) | 64.93 min (8.92x) |\\n| unconstrained-NS-ship | 6.848 M (20.75x) | 81236 MB (12.50x) | 66.15 min (7.56x) |\\n| unconstrained-MIP-bicycle | **DoS** | **DoS** | **DoS** |\\n| unconstrained-MIP-bonsai | **DoS** | **DoS** | **DoS** |\\n| unconstrained-MIP-counter | **DoS** | **DoS** | **DoS** |\\n| unconstrained-MIP-room | **DoS** | **DoS** | **DoS** |\\n\\nWe found that Mip-Splatting consumes more GPU memory compared to the original 3D Gaussian Splatting, making it more prone to the worst attack consequence of running Out-of-Memory. As illustrated in the above table, even when the attack perturbation is constrained to $\\\\epsilon=16/255$, the GPU memory consumption nearly reached the 80 GB capacity of the Nvidia A800. When we apply an unconstrained attack, all scenes in the MIP-NeRF360 dataset will result in denial-of-service. \\n\\nWe examined the code implementation of Mip-Splatting (https://github.com/autonomousvision/mip-splatting), and identified that it uses quantile computation in its Gaussian model densification function (in Line 524 of `mip-splatting/scene/gaussian_model.py`), which requires massive GPU memory and easily triggers out-of-memory. Our attack highlights the vulnerability in various 3DGS algorithm implementations.\\n\\n**We put the full results of the black-box attack on Mip-Splatting in Appendix Section D.2 and Table 6.**\\n\\n***Reference***\\n\\n*[1] Yu, Zehao, et al. \\\"Mip-splatting: Alias-free 3d gaussian splatting.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.*\"}", "{\"title\": \"Response to Reviewer SqKM (split 2 of 4)\", \"comment\": \"---\\n**W3&Q3. Quadratic complexity and practical concern.**\\n>`W3. Algorithm 1 indicates that generating backdoor samples with Poison-Splat requires a quadratic time complexity relative to the number of iterations, which raises concerns about the practicality of this approach during training.`\\n\\n>`Q3. 
Does the quadratic time complexity of Algorithm 1 raise practical concerns during training?`\\n\\nAccording to Algorithm 1, we need to solve a bi-level optimization problem, and the time complexity is related to $T$ (inner iterations) and $\\tilde{T}$ (outer iterations). However, in practice, our outer iterations $\\tilde{T}$ are far fewer than the inner iterations $T$, and the actual time cost for the attacker is even less than the victim's training time, which is an advantage of our attack. \\n\\nSpecifically, the total running time of Algorithm 1 is determined by two parts: \\n\\n1. Train a proxy model on clean data (Line 2 of Algorithm 1)\\n2. Solve the bi-level optimization (Line 3 to Line 13 of Algorithm 1)\\n\\nFor example, for unconstrained attacks on MIP-NeRF360, we set $T=6000$ and $\\tilde{T}=25$, and the attack time is entirely acceptable, and is even shorter than the victim's training time, as shown in the table below:\\n\\n| Scene | [Attacker: Proxy training + Bi-level optimization] | [Victim: Training time] |\\n|:----:|:----:|:----:|\\n|Bicycle | 33.42 + 16.57 min | 81.48 min |\\n|Bonsai | 21.68 + 14.00 min | 75.18 min |\\n|Counter | 26.46 + 15.23 min | 62.04 min |\\n|Flowers | 25.47 + 15.05 min | 62.62 min |\\n|Garden | 33.19 + 15.07 min | 83.81 min |\\n|Kitchen | 26.61 + 14.77 min | 73.04 min |\\n|Room | 25.36 + 16.50 min | 76.25 min |\\n|Stump | 27.06 + 15.63 min | 51.51 min |\\n\\nLastly, the primary motivation of our work is to expose the severity of this security backdoor, and attack efficiency is beyond the scope of this paper. We hope to leave the improvement of attack efficiency to future studies.\\n\\n---\\n**W4&Q4(1). Would the attack remain effective in a multi-GPU environment?**\\n>`W4. Would the Poison-Splat technique maintain its effectiveness when the 3DGS algorithm is trained in a multi-GPU environment? `\\n\\n>`Q4. 
Is Poison-Splat still effective in multi-GPU settings?`\\n\\nOur attack is hardware-agnostic: it makes no assumption about the hardware platform the victim is using. The computation resource consumption is determined by the number of Gaussians. Even if 3DGS training is carried out in a multi-GPU environment, as long as the total number of Gaussians remains at the same level, the Poison-Splat attack will remain effective.\\n\\n**W4&Q4(2). Attack should be tested on various GPUs with different clock frequencies and memory bandwidths.**\\n>`W4. Additionally, the attack should be assessed on various GPUs with different clock frequencies and memory bandwidths to evaluate the generalizability of the approach.`\\n\\n>`Q4. how does it perform across GPUs with different specs?`\\n\\nIn our paper, we conducted our attack tests using an 80GB A800 GPU, the largest-memory card available to us, to benchmark the extreme resource consumption of our attack. Following the reviewer's advice, we also test our attack on two other computation cards, the RTX A5000 and RTX 4090. A detailed comparison of the GPU specifications:\\n\\n| GPU type | Memory Size | Memory Bandwidth | Clock Speed | \\n|:------:|:---: | :----:|:------:|\\n|A800 | 80 GB | 2.04 TB/s | 1155 MHz | \\n|RTX 4090 | 24 GB | 1.01 TB/s | 2235 MHz |\\n|RTX A5000 | 24 GB | 768 GB/s | 1170 MHz |\\n\\nThe benchmark results (GPU memory / training time) of our poisoned dataset on these computation cards are presented in the table below. 
(**OOM** stands for out-of-memory error, which effectively indicates denial-of-service.)\\n\\n|Scene - $\\\\epsilon= 16/255$ | A800 | RTX 4090 | RTX A5000 | \\n|:------:|:---: | :---:|:----:|\\n| NS-hotdog | 29747 MB / 17.43 min | **OOM** | **OOM**|\\n| NS-lego | 9960 MB / 13.02 min | 9413 MB / 7.88 min | 10499 MB / 15.73 min |\\n| NS-materials | 4886 MB / 9.92 min| 3659 MB / 4.80 min | 5034 MB / 13.33 min |\\n| NS-ship | 16666 MB / 16.12 min| 15339 MB / 9.82 min | 17580 MB / 20.50 min |\\n| MIP-bicycle| 27074 MB / 44.44 min | **OOM** | **OOM**|\\n| MIP-bonsai | 20564 MB / 32.51 min | 18853 MB / 28.97 min | 21670 MB / 47.11 min |\\n| MIP-counter| 29796 MB / 37.63 min |**OOM** | **OOM**|\\n| MIP-garden | 23115 MB / 40.82 min | 21297 MB / 39.72 min | **OOM** |\\n\\nDue to variations in specifications and underlying hardware architectures, the same poisoned dataset can lead to different GPU memory usage and run times on different devices. It is evident that computation cards with less memory are particularly vulnerable to the most severe consequence of our attack: out-of-memory errors and service disruption, even when only a constrained attack is applied. An unconstrained attack would result in denial-of-service across all scenarios on these 24GB GPUs (c.f. Table 4 in Appendix Section D.1).\"}", "{\"title\": \"Response to Reviewer gaGQ\", \"comment\": \"Thanks for your effort in reviewing our paper; we are glad the reviewer recognized that our paper is well written and organized and that our experiments are extensive. We respond to the reviewer's concerns as follows.\\n\\n**W1&Q1. Feasibility in real world.**\\n>`W1&Q1. Is the attack practically feasible in real-world scenarios, or is it only feasible in theory?`\\n\\nThank you for paying attention to the real-world feasibility.
In our attack formulation, we do not impose strong assumptions on the victim 3DGS trainer, and the attack succeeds on diverse data (including MIP-NeRF360 and Tanks-and-Temples, which are real-world scene captures). Even when the victim's algorithm details are unknown, the attack still succeeds and even frequently triggers out-of-memory errors (c.f. **Q5 of Reviewer THFm**).\\n\\nThe practicality of our attack is further demonstrated by our tests on the real-world online 3DGS service platform Polycam (https://poly.cam/tools/gaussian-splatting). Polycam allows users to upload 20-100 images captured from various angles to create a Gaussian Splatting 3D model. We uploaded the poisoned dataset we made using the proxy model, treating Polycam as a real-world black-box model, unaware of the underlying GPU device or the specific 3DGS algorithm variant Polycam is running. We can infer three pieces of information through the website interface: (1) whether training is successful, (2) the service response time, and (3) the file size of the downloadable reconstructed 3DGS model.\\n\\nWe conducted tests of our attack using the `room` scene from MIP-NeRF360. The scene has a total of 311 views, but since Polycam only supports between 20 and 100 views, we uploaded the first $v$ views for reconstruction and kept the uploaded views identical for the clean and poisoned cases. The service responses are recorded in the table below:\\n\\n| Number of Views | Clean file size / response time | Poisoned file size / response time |\\n|:---:|:---:|:---:|\\n|20 | 15.8 MB / 16 min | 31.5 MB / 24 min|
As the input views cover more of the scene, both the 3D model file size and the training time increase; notably, when the number of views reached 70, the clean dataset still received a response while our poisoned dataset triggered a service failure. Although we do not have direct access to the memory consumption of the online service, the increasing trend in file size gives sufficient reason to believe that our poisoning exceeds the computation resource budget of a single reconstruction service. This evidence highlights the effectiveness of our Poison-splat attack in real-world scenarios.\\n\\n**W2&Q2. Outer objective approximation seems a theoretical assumption.**\\n>`W2&Q2. In the work, the authors approximate the outer maximum objective with the number of Gaussians, which appears to be a theoretical assumption that may not apply in real-world scenarios.`\\n\\nWe apologize for any confusion. Our approximation is based on the fact that the computation cost has a strong positive correlation with the number of Gaussians (c.f. Figure 2(a) and Figure 2(b)). Our experimental results (c.f. Section 4) and our testing on the real-world 3DGS platform Polycam (refer to **your Q1**) fully support this claim.\"}
For instance, if an attacker aims to prevent a model from completing its training, would this require prior knowledge of the service provider's computational resources?`\\n\\nYes, the most severe consequence of the attack would be a service crash (i.e. denial-of-service).\\n\\nWe do not assume the attacker has prior knowledge of the victim's computational resources. The attack is hardware-agnostic, and the attacker only aims to increase the computation cost as much as possible. We conduct experiments on A800 GPUs since they are the largest-memory GPUs we can access. Referring to **Q4 of Reviewer SqKM**, if the victim uses a common GPU with smaller memory (e.g. A5000 or RTX 4090), it is quite easy to trigger an out-of-memory error (i.e. denial-of-service), as illustrated in the table below.\\n\\n|Scene - $\\\\epsilon= 16/255$ | A800 | RTX 4090 | RTX A5000 | \\n|:--:|:--:|:--:|:--:|\\n| NS-hotdog | 29747 MB / 17.43 min | **OOM** | **OOM**|\\n| NS-lego | 9960 MB / 13.02 min | 9413 MB / 7.88 min | 10499 MB / 15.73 min |\\n| NS-materials | 4886 MB / 9.92 min| 3659 MB / 4.80 min | 5034 MB / 13.33 min |\\n| NS-ship | 16666 MB / 16.12 min| 15339 MB / 9.82 min | 17580 MB / 20.50 min |\\n| MIP-bicycle| 27074 MB / 44.44 min | **OOM** | **OOM**|\\n| MIP-bonsai | 20564 MB / 32.51 min | 18853 MB / 28.97 min | 21670 MB / 47.11 min |\\n| MIP-counter| 29796 MB / 37.63 min |**OOM** | **OOM**|\\n| MIP-garden | 23115 MB / 40.82 min | 21297 MB / 39.72 min | **OOM** |\\n\\n**Q6(2). Assessment of attack success or failure.**\\n>`Q6(2) Lastly, it is unclear how the success or failure of this type of attack should be assessed. For example, if the service provider\\u2019s computational resources are sufficient to handle the increased demand, should the attack then be considered unsuccessful?`\\n\\nFollowing the previous question and our response to your **W2**, we consider an attack more successful the more extra computation cost its poison perturbation introduces. 
Even if the attack does not cause the worst consequence, denial-of-service, we can still call the attack successful if it over-consumes resources and degrades service quality for legitimate users.\\n\\nFor example, our unconstrained attacks on NeRF-Synthetic result in **17.29 times more GPU memory** and **4.77 times more training time** on average compared with the unattacked baseline. Put more plainly, **a single training run on our poisoned datasets** consumes enough resources for **82.47 normal training runs**.\"}
A low threshold can safeguard the victim\\u2019s computational resources from over-consumption, but may substantially degrade the reconstruction quality for complex scenes. Choosing the defense threshold should take this resource-quality tradeoff into account. Taking the `room` scene of MIP-NeRF360 as an example, we further tested defense thresholds from 2 million to 6 million Gaussians, as shown in the following table:\\n\\n| Defense threshold | Max GPU memory(MB) | Reconstruction PSNR(dB) |\\n|:---:|:----:|:---:|\\n| *Clean input* |12316|29.21|\\n|2 Million |14192|19.03| \\n|3 Million |16472|22.27|\\n|4 Million |18784|24.11| \\n|5 Million |23642|24.49|\\n|6 Million |36214|28.02|\\n|Poison + No Defense|46238|29.08|\\n\\nAs the defense threshold becomes tighter, the maximum GPU memory usage is confined to a safer range. However, this causes a significant drop in the reconstruction\\u2019s PSNR, falling to as low as 19.03, which makes it nearly unusable (c.f. Figure 6 (A)). **We have added a new Section I in the Appendix, along with visualizations of different defense thresholds in Figure 6.** Please refer to our revised paper.\\n\\n---\\n**W6. Precise definition of threat model.**\\n>`W6. The definition of the threat model for the Poison-Splat attack could be more precise, particularly in specifying attacker capabilities and constraints. Explicitly define white-box and black-box scenarios for the proxy model.`\\n\\nThank you for the advice. 
We have added a new section (**Appendix Section C**) to our revised paper to clarify the threat model.\\n\\nThroughout the paper, we assume the attacker has the following goal, input, output, information, and constraints:\\n\\n**Attacker Goal**: To increase the computation cost and over-consume the computation resources of the 3DGS service provider.\\n\\n**Attacker Input**: Clean data.\\n\\n**Attacker Output**: Poisoned data, produced by running the attack on the clean data.\\n\\n**Attacker Information**: The attacker does not need to know the underlying hardware device of the victim.\\n\\n - White-box: The attacker knows the specific 3DGS configuration of the victim.\\n - Black-box: The attacker only knows the victim is using 3DGS, but does not know which 3DGS variant or configuration the victim is using. \\n\\n**Attacker Constraint**: The attacker can optionally constrain the range $\\\\epsilon$ of the additive perturbation applied to the clean data. \\n\\n**Attacker Algorithm**: Algorithm 1.\"}
First, we want to highlight that **our attack is designed specifically for the training phase**, and the denial-of-service is meant to happen during training. Please refer to **Q4 of Reviewer SqKM**, where denial-of-service is frequently encountered when training on an RTX A5000 or RTX 4090. \\n\\n\\nAlthough it is a training-phase attack by design, we can still observe consequences of the attack in the inference phase. For example, we observe a slowdown in rendering speed, reflected in a significantly lower FPS (c.f. rightmost columns of Table 1, Table 3 and Table 4). We summarize the average FPS drop in the following table:\\n\\n| Dataset - attack setting | Average Clean Render Speed (FPS) | Average Poisoned Render Speed (FPS) | Average slowdown |\\n|:---:|:---:|:---:|:---:|\\n|Nerf-Synthetic-eps16 | 352.250 | 165.125| 2.13x $\\\\downarrow$ |\\n|Nerf-Synthetic-unconstrained | 352.250 | 28.250|12.47x $\\\\downarrow$|\\n|MIP-Nerf360-eps16 | 130.333 | 56.778 | 2.30x $\\\\downarrow$|\\n|MIP-Nerf360-unconstrained | 130.333 | 19.778 | 6.59x $\\\\downarrow$|\\n\\n\\nOn the other hand, our attack causes a higher storage overhead for the trained 3DGS model at inference time, which is self-evident since the final number of Gaussians is massively increased.\\n\\n| Dataset - attack setting | Average Clean Model Size (GB) | Average Poisoned Model Size (GB) | Average storage increase |\\n|:---:|:---:|:---:|:---:|\\n|Nerf-Synthetic-eps16 | 0.067 | 0.165 | 2.46x$\\\\uparrow$ |\\n|Nerf-Synthetic-unconstrained | 0.067 | 0.925 | 13.80x $\\\\uparrow$|\\n|MIP-Nerf360-eps16 | 0.740 | 1.615 | 2.18x $\\\\uparrow$|\\n|MIP-Nerf360-unconstrained | 0.740 | 3.906 | 5.28x $\\\\uparrow$|\"}
There are no intuitive metrics for evaluating the success or failure of this attack.`\\n\\nIn this paper, we consider the computation cost attack successful if the resulting computation cost exceeds that required for the clean input. If the attacker wishes to apply higher standards when measuring attack effectiveness, they can define metrics as multiples of the original computation cost.\\n\\n\\nFor example, we can define GPU memory usage exceeding 100%, 150%, and 200% of the clean-input usage as three levels of attack success. The attack success rates at these three levels are summarized in the following table:\\n\\n\\n| Attack Dataset - setting | ASR@100%GPU | ASR@150%GPU | ASR@200%GPU |\\n|:--|:--:|:--:|:--:|\\n| NeRF Synthetic $\\\\epsilon = 16/255$ | 100% | 87.5%| 62.5%|\\n| MIP-NeRF360 $\\\\epsilon = 16/255$ | 100% | 77.8% | 33.3% |\\n| NeRF Synthetic unconstrained | 100% | 100% | 100% |\\n| MIP-NeRF360 unconstrained | 100% | 100% | 100% |\"}
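A thresholded success metric of this kind is mechanical to compute. The sketch below is illustrative only: the threshold convention (poisoned-run GPU memory exceeding a given multiple of the clean-run memory) is our assumed reading of ASR@X%, and the numbers in the usage comment are placeholders, not the paper's measurements:

```python
def asr_at(clean_mem, poisoned_mem, ratio):
    """Fraction of scenes whose poisoned-run GPU memory exceeds `ratio`
    times the clean-run memory (ratio=1.0 -> ASR@100%, 1.5 -> ASR@150%)."""
    assert len(clean_mem) == len(poisoned_mem) and clean_mem
    hits = sum(p > ratio * c for c, p in zip(clean_mem, poisoned_mem))
    return hits / len(clean_mem)

# Placeholder data: four scenes, clean memory 100 MB each.
# asr_at([100, 100, 100, 100], [150, 210, 90, 300], 1.0) -> 0.75
```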
In the main paper, we only consider 100% poison rate based on the assumption that attacker can fully control the data collection process, since 3D datasets are typically composed of images of the same scene from different angles.\\n\\nTo answer your question, we further explore how varying poisoning ratio can affect the attack\\u2019s effectiveness, as reported in the following table:\\n| Scene | Poison Ratio | Number of Gaussians | GPU memory (MB) | Training time (min) |\\n|-------------------|--------------|---------------------|------------------|---------------------|\\n| **NS-hotdog-eps16** | clean | 0.185 M \\u00b1 0.000 M | 3336 MB \\u00b1 11 MB | 8.57 min \\u00b1 0.30 min |\\n| | 20% | 0.203 M \\u00b1 0.004 M | 3286 MB \\u00b1 70 MB | 8.84 min \\u00b1 0.27 min |\\n| | 40% | 0.279 M \\u00b1 0.024 M | 3896 MB \\u00b1 286 MB | 9.79 min \\u00b1 0.53 min |\\n| | 60% | 0.501 M \\u00b1 0.054 M | 7367 MB \\u00b1 1200 MB| 11.33 min \\u00b1 0.82 min|\\n| | 80% | 0.806 M \\u00b1 0.018 M | 13621 MB \\u00b1 861 MB| 14.02 min \\u00b1 0.60 min|\\n| | 100% | 1.147 M \\u00b1 0.003 M | 29747 MB \\u00b1 57 MB | 17.43 min \\u00b1 0.61 min|\\n|----|-----|----------|----------------------|--------------------------|\\n| **MIP-counter-eps16**| clean | 1.195 M \\u00b1 0.005 M | 10750 MB \\u00b1 104 MB | 26.46 min \\u00b1 0.57 min |\\n| | 20% | 1.221 M \\u00b1 0.005 M | 11043 MB \\u00b1 141 MB | 27.41 min \\u00b1 0.85 min |\\n| | 40% | 1.358 M \\u00b1 0.030 M | 11535 MB \\u00b1 147 MB | 27.97 min \\u00b1 0.47 min |\\n| | 60% | 2.005 M \\u00b1 0.056 M | 13167 MB \\u00b1 227 MB | 28.87 min \\u00b1 0.52 min |\\n| | 80% | 3.273 M \\u00b1 0.119 M | 19578 MB \\u00b1 1417 MB | 32.99 min \\u00b1 0.57 min |\\n| | 100% | 4.628 M \\u00b1 0.014 M | 29796 MB \\u00b1 187 MB | 37.63 min \\u00b1 0.48 min |\\n\\nIt is evident that attack performance, in terms of both GPU memory usage and training time extension, becomes stronger as poisoning rate increases. 
Here we randomly selected the poisoned views and report the standard deviation across three individual runs. Please refer to **Table 12 in Appendix Section H** to see the full results.\\n\\n---\\n**W2&Q2. Smoothness threshold and its relationship with attack performance and stealth.**\\n>`W2. The paper mentions that Poison-Splat maximizes the number of Gaussians by enhancing the sharpness of 3D objects through controlling the smoothness factor. However, it is not clearly explained how the smoothness threshold is defined to ensure an effective attack. Additionally, the impact of this threshold on the stealthiness of the attack remains unclear.`\\n\\n>`Q2. How is the smoothness threshold defined, and how does it impact the stealth of the attack?`\\n\\nThank you for your question. We notice that the term \\\"smoothness threshold\\\" was not explicitly mentioned in our manuscript, and we speculate that you might be referring to what we termed the \\\"perturbation range\\\" in our paper.\\n \\nGenerally, as the perturbation range increases, the attack becomes more effective but loses stealthiness due to greater distortion of the image. The attacker's choice of perturbation range depends on their goals: if prioritizing attack effectiveness, they may opt for unlimited perturbations. If stealth is more important, they might choose a smaller $\\\\epsilon$ to make the changes hard to detect.\\n\\nWe conducted an ablation study on the perturbation range $\\\\epsilon$, in which the trends of attack effectiveness and stealth with respect to $\\\\epsilon$ are quite apparent. Please refer to **Table 10 and Table 11 in Appendix Section G** for detailed results.
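Concretely, a constrained attack of this kind keeps each poisoned pixel within an $\ell_\infty$ ball of radius $\epsilon$ around the clean pixel, which is what bounds the visible distortion. A minimal stdlib sketch of that projection, assuming pixel values normalized to [0, 1] (flattened lists stand in for image tensors):

```python
def project_perturbation(clean, poisoned, eps):
    """Clamp each poisoned pixel to within eps of the corresponding clean
    pixel (l-infinity projection), then to the valid [0, 1] pixel range."""
    out = []
    for c, p in zip(clean, poisoned):
        p = min(max(p, c - eps), c + eps)   # enforce |p - c| <= eps
        out.append(min(max(p, 0.0), 1.0))   # enforce valid pixel range
    return out
```

Applied after each attacker update with `eps = 16 / 255`, this keeps the poisoned image visually close to the clean one; with no projection (unconstrained), the attack is free to distort the image arbitrarily.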
Given the limited time for discussion, we would appreciate it if you could let us know whether we have fully addressed your concerns; if there are any further questions, we are happy to resolve them.\\n\\nBest regards,\\n\\nAuthor of #968\"}
Additionally, could existing methods for detecting adversarial samples be effective in identifying these poisoned samples?`\\n\\nMany adversarial sample detection methods are heavily built on properties of classifiers, including those based on uncertainty/confidence[1][2][10], softmax/logits[3][4][5][6], gradients[11], or feature statistics[7][8][9]. These methods are specially designed for classification tasks and it is not straightforward to adapt to our scenario, where our victim model is a 3D generative model.\\n\\nFor other adversarial detection not relying on classification, for example methods based on natural image/training distribution statistics[12][13][14] or denoiser methods[15][16], their generalization ability to new data and their effectiveness against adaptive attacks are worth questioning and need further in-depth analysis. We leave the exploration of effective and generalizable detection methods for future work.\\n\\n***Reference***\\n\\n*[1] Feinman, Reuben, et al. \\\"Detecting adversarial samples from artifacts.\\\" arXiv preprint arXiv:1703.00410 (2017).*\\n\\n*[2] Smith, Lewis, and Yarin Gal. \\\"Understanding measures of uncertainty for adversarial example detection.\\\" UAI 2018.*\\n\\n*[3] Hendrycks, Dan, and Kevin Gimpel. \\\"A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks.\\\" ICLR 2022.*\\n\\n*[4] Aigrain, Jonathan, and Marcin Detyniecki. \\\"Detecting adversarial examples and other misclassifications in neural networks by introspection.\\\" arXiv preprint arXiv:1905.09186 (2019).*\\n\\n*[5] Monteiro, Jo\\u00e3o, et al. \\\"Generalizable adversarial examples detection based on bi-model decision mismatch.\\\" 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2019.*\\n\\n*[6] Pang, Tianyu, et al. \\\"Towards robust detection of adversarial examples.\\\" NeurIPS 2018.*\\n\\n*[7] Cohen, Gilad, Guillermo Sapiro, and Raja Giryes. 
\\\"Detecting adversarial samples using influence functions and nearest neighbors.\\\" CVPR 2020.*\\n\\n*[8] Mao, Xiaofeng, et al. \\\"Learning to characterize adversarial subspaces.\\\" ICASSP 2020.*\\n\\n*[9] Lu, Jiajun, Theerasit Issaranon, and David Forsyth. \\\"Safetynet: Detecting and rejecting adversarial examples robustly.\\\" ICCV 2017.*\\n\\n*[10] Sotgiu, Angelo, et al. \\\"Deep neural rejection against adversarial examples.\\\" EURASIP Journal on Information Security 2020 (2020): 1-10.*\\n\\n*[11] Lust, Julia, and Alexandru Paul Condurache. \\\"GraN: an efficient gradient-norm based detector for adversarial and misclassified examples.\\\" arXiv preprint arXiv:2004.09179 (2020).*\\n\\n*[12] Kherchouche, Anouar, et al. \\\"Detection of adversarial examples in deep neural networks with natural scene statistics.\\\" 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.*\\n\\n*[13] Grosse, Kathrin, et al. \\\"On the (statistical) detection of adversarial examples.\\\" arXiv preprint arXiv:1702.06280 (2017).*\\n\\n*[14] Lee, Kimin, et al. \\\"A simple unified framework for detecting out-of-distribution samples and adversarial attacks.\\\" NeurIPS 2018.*\\n\\n*[15] Meng, Dongyu, and Hao Chen. \\\"Magnet: a two-pronged defense against adversarial examples.\\\" Proceedings of the 2017 ACM SIGSAC conference on computer and communications security. 2017.*\\n\\n*[16] Song, Yang, et al. \\\"PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples.\\\" ICLR 2018.*\"}", "{\"summary\": \"The paper introduces Poison-splat, a data poisoning attack targeting the training phase of 3D Gaussian Splatting (3DGS). 
It exposes a vulnerability in the adaptive model complexity of 3DGS, showing how manipulated input data can significantly escalate computation costs during training, potentially resulting in a Denial-of-Service by consuming all available memory.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"A unique computation cost attack targeting 3D Gaussian Splatting.\", \"Highlights practical vulnerabilities in commercial 3D reconstruction services.\", \"Thorough experimentation across various datasets.\"], \"weaknesses\": [\"The paper frames the problem as a data poisoning attack. However, it does not clearly elaborate on the poisoning ratio required for Poison-Splat to be effective. Additionally, the implications of varying poisoning ratios, particularly their impact on the stealthiness and overall effectiveness of the attack, are not thoroughly discussed.\", \"The paper mentions that Poison-Splat maximizes the number of Gaussians by enhancing the sharpness of 3D objects through controlling the smoothness factor. However, it is not clearly explained how the smoothness threshold is defined to ensure an effective attack. Additionally, the impact of this threshold on the stealthiness of the attack remains unclear.\", \"Algorithm 1 indicates that generating backdoor samples with Poison-Splat requires a quadratic time complexity relative to the number of iterations, which raises concerns about the practicality of this approach during training.\", \"Would the Poison-Splat technique maintain its effectiveness when the 3DGS algorithm is trained in a multi-GPU environment? Additionally, the attack should be assessed on various GPUs with different clock frequencies and memory bandwidths to evaluate the generalizability of the approach.\", \"The paper states a basic defense against Poison-Spat by limiting the number of Gaussians, but it does not specify the threshold for the number considered in this defense. 
It would be beneficial to include an evaluation that explores the effect of varying these Gaussian limits.\", \"The definition of the threat model for the Poison-Splat attack could be more precise, particularly in specifying attacker capabilities and constraints. Explicitly define white-box and black-box scenarios for the proxy model.\", \"Why does the attack perform well on specific datasets in the white-box scenario but less effectively on others, such as the Tanks-and-Temples data (as shown in Table 1 and Table 3)? Can authors provide additional reasoning?\"], \"questions\": \"1. What poisoning ratio is needed for Poison-Splat to be effective, and how does it affect stealth and impact?\\n2. How is the smoothness threshold defined, and how does it impact the stealth of the attack?\\n3. Does the quadratic time complexity of Algorithm 1 raise practical concerns during training?\\n4. Is Poison-Splat still effective in multi-GPU settings, and how does it perform across GPUs with different specs?\\n5. What is the Gaussian limit threshold in the basic defense, and how do varying limits affect the attack?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are thrilled to receive your acknowledgment of our work and rebuttal. Your questions have opened up new perspectives for us and helped us think more comprehensively. Thank you once again for your recognition, which is highly valuable and incredibly encouraging to us!\"}", "{\"title\": \"Response to Reviewer THFm (Split 3 of 5)\", \"comment\": \"**Q2. Attacking training phase or attacking inference phase? Are existing inference-phase attacks applicable to training phase?**\\n>`Q2. Are existing attack methods targeting the inference phase applicable to the training phase? If so, could these methods be included in the experimental analysis for comparison? 
If not, could the authors explain why?`\\n\\nWe need to clarify that our attack is designed to attack the training phase of 3DGS. For further clarity, we provide a detailed description outlining the input/output of the training and inference phases:\\n\\n+ **Training phase of 3DGS**: the model takes camera positions as input and outputs reconstructed view images. These reconstructed images are then compared with ground-truth view images to compute the reconstruction loss, which is subsequently used to update the model.\\n\\n+ **Inference phase of 3DGS**: the model only receives camera positions as input, but this time, it simply outputs the view image without any learning or updating process.\\n\\nIt is important to note that 3DGS does not have image inputs during inference. The only inputs are camera positions, which consist of a few extrinsic parameters. This setup is distinctly different from that of classifiers or neural networks. Consequently, in the inference stage, there is limited room to conduct attacks on the 3DGS model.\\n\\nAmong existing inference-stage attacks, most exploit the model's limited generalization ability (e.g. adversarial attacks, jailbreak attacks) or leverage the model's behavior differences across various data distribution areas (e.g. membership inference attacks, energy-latency attacks on language models). These attacks are specifically designed for networks that are already trained and widely deployed. Their goal is not to disrupt the model training process itself, but to manipulate or degrade model performance during deployment. Therefore, directly applying these inference-based attacks to the training phase of 3DGS is infeasible.
In the main paper, we only consider a 100% poison rate, based on the assumption that the attacker can fully control the data collection process, since 3D datasets are typically composed of images of the same scene from different angles.\\n\\nTo answer your question, we further explore how the poisoning ratio affects the attack\\u2019s effectiveness, as reported in the following table:\\n| Scene | Poison Ratio | Number of Gaussians | GPU memory (MB) | Training time (min) |\\n|:--|:--:|:--:|:--:|:--:|\\n| **NS-hotdog-eps16** | clean | 0.185 M \\u00b1 0.000 M | 3336 MB \\u00b1 11 MB | 8.57 min \\u00b1 0.30 min |\\n| | 20% | 0.203 M \\u00b1 0.004 M | 3286 MB \\u00b1 70 MB | 8.84 min \\u00b1 0.27 min |\\n| | 40% | 0.279 M \\u00b1 0.024 M | 3896 MB \\u00b1 286 MB | 9.79 min \\u00b1 0.53 min |\\n| | 60% | 0.501 M \\u00b1 0.054 M | 7367 MB \\u00b1 1200 MB| 11.33 min \\u00b1 0.82 min|\\n| | 80% | 0.806 M \\u00b1 0.018 M | 13621 MB \\u00b1 861 MB| 14.02 min \\u00b1 0.60 min|\\n| | 100% | 1.147 M \\u00b1 0.003 M | 29747 MB \\u00b1 57 MB | 17.43 min \\u00b1 0.61 min|\\n|------|----|--|---|-----|\\n| **MIP-counter-eps16**| clean | 1.195 M \\u00b1 0.005 M | 10750 MB \\u00b1 104 MB | 26.46 min \\u00b1 0.57 min |\\n| | 20% | 1.221 M \\u00b1 0.005 M | 11043 MB \\u00b1 141 MB | 27.41 min \\u00b1 0.85 min |\\n| | 40% | 1.358 M \\u00b1 0.030 M | 11535 MB \\u00b1 147 MB | 27.97 min \\u00b1 0.47 min |\\n| | 60% | 2.005 M \\u00b1 0.056 M | 13167 MB \\u00b1 227 MB | 28.87 min \\u00b1 0.52 min |\\n| | 80% | 3.273 M \\u00b1 0.119 M | 19578 MB \\u00b1 1417 MB | 32.99 min \\u00b1 0.57 min |\\n| | 100% | 4.628 M \\u00b1 0.014 M | 29796 MB \\u00b1 187 MB | 37.63 min \\u00b1 0.48 min |\\n\\nIt is evident that the attack performance, in terms of both GPU memory usage and training time extension, becomes stronger as the poisoning rate increases. Here we randomly selected the poisoned views and report the standard deviation across three individual runs. 
Please refer to **Table 12 in Appendix Section H** to see the full results.\"}", "{\"summary\": \"This paper proposes an adversarial attack against 3D Gaussian splatting, aiming at increasing the computational cost of this process. Their attack is based on the flexibility of this algorithm, in which the computational cost will change dynamically according to the input image features. Their attack named poison-splat leverages a proxy 3DGS model and the improvement of the total variation score to increase the number of gaussians required in computation, hence bring a huge computational cost regarding GPU memory usage and training time. Their evaluation has included both white-box and black-box attack results and discussed simple defense strategies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work identifies a new kind of vulnerability in 3DGS systems, which is the computational cost attack.\\n2. Authors have proposed an efficient algorithm to optimize a perturbation to increase the number of gaussians required in 3DGS.\\n3. The presentation of the paper is clear and easy to follow.\\n4. Evaluation results demonstrate the good attack performance in both black-box and white-box settings.\", \"weaknesses\": \"1. The constraint of the perturbation (epsilon = 16/255) seems large, and the quality of the resulted image could be affected. More ablation studies may be conducted to evaluate other constraint thresholds.\\n2. A simple defense might be smoothing the input images before conducting 3DGS, which seems an adaptive defense regarding your perturbations to the input. You may discuss or evaluate the effectiveness and negative impact of such defense.\", \"questions\": \"1. Since there are many online services using 3DGS, as you mentioned in the paper, have you evaluated the real-world attack performance of your technique on those application? 
Will the responding time be extended or causing deny of service?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' detailed response and the effort they invested in providing new results. I am satisfied with their reply and have raised my rating.\"}", "{\"title\": \"Response to Reviewer ybEK (split 2 of 2)\", \"comment\": \"**Q1. Evaluate the attack on real-world 3DGS online services.**\\n>`Q1. Since there are many online services using 3DGS, as you mentioned in the paper, have you evaluated the real-world attack performance of your technique on those application? Will the responding time be extended or causing deny of service?`\\n\\nThank you for this constructive advice. We further evaluated our attack on the platform of a real-world 3DGS online service provider, Polycam (https://poly.cam/tools/gaussian-splatting). Polycam allows users to upload a set of images captured from various angles to create a Gaussian Splatting 3D model. \\n\\nPolycam accepts 20-100 uploaded images for a specific scene. Since it is an online 3DGS service provided for users, we do not know the underlying GPU device, nor the specific 3DGS algorithm variant Polycam is running. We simply uploaded the poisoned dataset we made using the proxy model, treating Polycam as a real-world black-box model. We can infer three pieces of information through the website interface: (1) whether training is successful, (2) the service response time, and (3) the file size of the downloadable reconstructed 3DGS model.\\n\\nWe conducted tests of our attack using the `room` scene from MIP-NeRF360. The scene has a total of 311 views, but since Polycam only supports between 20 and 100 views, we uploaded the first $v$ views for reconstruction and kept the uploaded views identical for the clean and poisoned cases. 
The service responses are recorded in the table below:\\n\\n\\n| Number of Views | Clean file size / response time | Poisoned file size / response time |\\n|:---:|:---:|:---:|\\n|20 | 15.8 MB / 16 min | 31.5 MB / 24 min|\\n|60 | 29.0 MB / 29 min | 46.1 MB / 56 min|\\n|70 | 32.4 MB / 33 min |**Service failed** |\\n|100| 40.2 MB / 45 min | **Service failed** |\\n\\nAlthough we do not know the specific algorithm variant or device used by Polycam, our attack successfully results in a more complex 3D model and extends the training time. As the input views cover more of the scene, both the 3D model file size and the training time increase; notably, when the number of views reached 70, the clean dataset still received a response, while our poisoned dataset caused a service failure. Although we don't have direct access to the memory consumption of the online service, the increasing trend in file size provides sufficient reason to believe that our poisoning exceeds the computation resource budget of a single reconstruction service. This evidence highlights the effectiveness of our Poison-splat attack in real-world scenarios.\"}", "{\"title\": \"Response to Reviewer SqKM (split 4 of 4)\", \"comment\": \"---\\n**W7. Why does the attack perform less effectively on Tanks-and-Temples?**\\n>`W7. Why does the attack perform well on specific datasets in the white-box scenario but less effectively on others, such as the Tanks-and-Temples data (as shown in Table 1 and Table 3)? Can authors provide additional reasoning?`\\n\\nThank you for paying attention to this performance difference. We also noticed that the Poison-Splat attack appears to be more effective on NeRF-Synthetic and MIP-NeRF360 than on the Tanks-and-Temples dataset. We propose a conjecture to explain this based on our observations and experience.\\n\\nWe noticed a big difference in the camera setups across these datasets. Specifically, camera poses in Tanks-and-Temples are more zoomed-in and close-up, while the other two datasets use wider, panoramic views. 
This difference may influence the extent of object surfaces exposed to different camera angles. Intuitively, the more angles from which an object is viewed, the greater the opportunity for optimizing the Total Variation (TV) score. This is due to our bi-level optimization process where, in each iteration, a new camera view is sampled (Line 4 of Algorithm 1), and its TV score is optimized (Line 8 of Algorithm 1). Consequently, greater exposure of an object within the scene leads to greater TV score growth.\\n\\nOur empirical tests further support this conjecture. We found that the more panoramic NeRF-Synthetic and MIP-NeRF360 datasets offer more opportunity to optimize TV scores during poisoning than Tanks-and-Temples, leading to higher TV score growth and more effective attack performance.\\n\\n| Dataset | TV Score Growth| #Gaussians Growth| GPU Memory Growth | Training Time Growth|\\n|:---:|:---:|:---:|:---:|:----:|\\n| NeRF-Synthetic | 4.36x | 2.68x | 3.13x | 1.52x|\\n| MIP-NeRF360 | 3.18x | 2.81x | 1.97x | 1.43x|\\n| Tanks-and-Temples | 2.59x | 1.71x | 1.35x | 1.27x|\"}", "{\"title\": \"Response to Reviewer ybEK (split 1 of 2)\", \"comment\": \"Thanks for the effort in reviewing our paper; we are glad that our novelty, efficiency, presentation, and good performance are recognized. We address your questions as follows.\\n\\n**W1. Other constraint thresholds.**\\n>`W1. The constraint of the perturbation (epsilon = 16/255) seems large, and the quality of the resulted image could be affected. More ablation studies may be conducted to evaluate other constraint thresholds.`\\n\\nThank you for raising this point. To evaluate the trade-off between attack performance and image quality, we further conducted experiments with different perturbation constraints, $\\\\epsilon=8/255$ and $\\\\epsilon=24/255$. It is evident that a larger perturbation range boosts the attack performance, consuming more GPU memory and requiring longer training time. 
However, it also results in higher image distortion, as indicated by lower SSIM and PSNR values compared with clean images. We show one scene from each of NeRF-Synthetic and MIP-NeRF360:\\n\\n| Attack setting | Number of Gaussians | GPU memory | Training time | SSIM | PSNR |\\n|------------------|:---:|:---:|:---:|:---:|:---:|\\n| **Nerf-Synthetic-ship** | | | | | |\\n| clean | 0.272 M | 3692 MB | 8.87 min | - | - |\\n| $\\\\epsilon=8/255$ | 0.516 M | 5574 MB | 11.01 min | 0.55 | 33.42 |\\n| $\\\\epsilon=16/255$| 1.071 M | 16666 MB | 16.12 min | 0.33 | 27.50 |\\n| $\\\\epsilon=24/255$| 1.365 M | 29828 MB | 18.46 min | 0.24 | 24.03 |\\n| unconstrained | 4.317 M | 80956 MB | 44.11 min | 0.04 | 5.31 |\\n| **MIP-NeRF360-counter** | | | | | |\\n| clean | 1.195 M | 10750 MB | 26.46 min | - | - |\\n| $\\\\epsilon=8/255$ | 1.739 M | 15133 MB | 28.06 min | 0.80 | 32.24 |\\n| $\\\\epsilon=16/255$| 4.628 M | 29796 MB | 37.63 min | 0.52 | 26.78 |\\n| $\\\\epsilon=24/255$| 6.649 M | 47607 MB | 43.68 min | 0.35 | 23.45 |\\n| unconstrained | 11.167 M | 80732 MB | 62.04 min | 0.01 | 6.64 |\\n\\nPlease refer to the **full results in Appendix Section G, Table 10 and Table 11** in our revised paper.\\n\\n---\\n**W2. Smoothing the input images can be a defense.**\\n>`W2. A simple defense might be smoothing the input images before conducting 3DGS, which seems an adaptive defense regarding your perturbations to the input. You may discuss or evaluate the effectiveness and negative impact of such defense.`\\n\\nThank you for this constructive suggestion. We will show that image smoothing as a pre-processing step for the Gaussian Splatting training procedure is not an ideal defense. Without a reliable detection method, defenders may only resort to universally applying image smoothing to all incoming data. 
This pre-processing will significantly compromise reconstruction quality.\\n\\nWe found that although image smoothing may reduce GPU consumption to some extent, it severely undermines efforts to preserve fine image details[1][2][3]. As illustrated in **Figure 7 in Appendix Section J**, applying common smoothing techniques such as Gaussian filtering or Bilateral filtering[4] leads to a substantial degradation in reconstruction quality. For instance, on the `chair` scene of the NeRF-Synthetic dataset, reconstruction achieves 36.91 dB PSNR without pre-processing; however, with Gaussian or Bilateral filtering, the PSNR drops sharply to around 25 dB. This level of degradation is clearly undesirable. Given these challenges, we urge the community to develop more sophisticated detection and defense mechanisms against computation cost attacks, balancing resource consumption with the preservation of image quality.\\n\\n*References*\\n\\n*[1] Yu, Zehao, et al. \\\"Mip-splatting: Alias-free 3d gaussian splatting.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.*\\n\\n*[2] Ye, Zongxin, et al. \\\"Absgs: Recovering fine details in 3d gaussian splatting.\\\" Proceedings of the 32nd ACM International Conference on Multimedia. 2024.*\\n\\n*[3] Yan, Zhiwen, et al. \\\"Multi-scale 3d gaussian splatting for anti-aliased rendering.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.*\\n\\n*[4] Tomasi, Carlo, and Roberto Manduchi. \\\"Bilateral filtering for gray and color images.\\\" Sixth international conference on computer vision (IEEE Cat. No. 98CH36271). IEEE, 1998.*\"}", "{\"metareview\": \"This paper proposes a new security vulnerability in 3D Gaussian Splatting. All reviewers provided positive feedback on this paper. The AC read all reviews and the rebuttal and recommends this paper as a spotlight paper. The paper also shows the practical usage of this attack, which is very important. 
The AC hopes the authors can revise the paper based on the reviewers' comments.\", \"additional_comments_on_reviewer_discussion\": \"Before the rebuttal, reviewers had concerns about the key contribution, missing experiments and ablation studies, missing computational cost analysis, the attack's performance on real-world 3DGS online services, and so on. The authors did a great job of conducting a large number of experiments to address the reviewers' concerns. Finally, almost all reviewers increased their scores. The AC hopes the authors will reorganize the paper to add all these important experiments in the revised version.\"}", "{\"summary\": \"This work reveals a major security vulnerability that has been overlooked in 3D Gaussian Splatting (3DGS): the computational cost of training 3DGS can be maliciously manipulated by poisoning the input data. This paper introduces a novel attack, termed \\\"Poison,\\\" in which an adversary can poison the input images, thereby significantly increasing the memory and computational time required for 3DGS training, ultimately pushing the algorithm to its highest computational complexity. In extreme cases, the attack can even exhaust all available memory, leading to a denial-of-service (DoS) event on the server and causing real harm to 3DGS service providers.\\nThe attack is modeled as a two-layer optimization problem, addressed through three strategies: attack target approximation, proxy model rendering, and optional constrained optimization. This proposed approach not only ensures the effectiveness of the attack but also makes it challenging for simple, existing defenses to succeed. This novel attack aims to raise awareness of the potential vulnerabilities in 3DGS systems.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. It reveals a major security vulnerability that has been overlooked in 3DGS.\\n2. The attack's effectiveness is validated across various datasets.\", \"weaknesses\": \"1. 
It lacks evaluation of the attack's effects on the inference phase.\\n2. There are no intuitive metrics for evaluating the success or failure of this attack.\", \"questions\": \"(1) As a model training service provider, certain benchmarks, such as model size and typical training duration, are commonly understood. Given these expectations, would an attack that poisons samples to increase memory usage and training time be easy to detect? Additionally, could existing methods for detecting adversarial samples be effective in identifying these poisoned samples?\\n(2) Are existing attack methods targeting the inference phase applicable to the training phase? If so, could these methods be included in the experimental analysis for comparison? If not, could the authors explain why?\\n(3) Since this attack targets the training phase, it would be helpful if the authors could analyze the required percentage of contamination in a clean dataset for the attack to succeed.\\n(4) In the Experimental Analysis section, the authors conducted extensive experiments to analyze the impact of the attack on memory usage and training duration, which is commendable. However, while the abstract suggests that Poison-splat can lead to a DoS damage to the server, the Experimental Analysis section lacks any evaluation of the attack's effects on the inference phase.\\n(5) Could the authors add more common models to compare the effectiveness of black-box and white-box attacks?\\n(6) The primary objective of the attack is to consume excessive computational resources, but what are the most serious consequences of this? Is it a denial-of-service (DoS) scenario? Additionally, how difficult would it be to achieve significant damage? For instance, if an attacker aims to prevent a model from completing its training, would this require prior knowledge of the service provider's computational resources? Lastly, it is unclear how the success or failure of this type of attack should be assessed. 
For example, if the service provider\\u2019s computational resources are sufficient to handle the increased demand, should the attack then be considered unsuccessful?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper discovers a security vulnerability in 3DGS. It shows that the computation cost of\\ntraining 3DGS could be maliciously tampered with by poisoning the input data. An attack named Poison-splat is presented.\\n\\nI have read the response of the authors and the comments of other reviewers. I would recommend weak accept.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written and organized.\\n2. The paper reveals that the flexibility in model complexity of 3DGS can become a security backdoor, making it vulnerable to computation cost attacks.\\n3. Attacks are formulated and extensive experiments are conducted.\", \"weaknesses\": \"1. Is the attack practically feasible in real-world scenarios, or is it only feasible in theory?\\n2. In the work, the authors approximate the outer maximum objective with the number of Gaussians, which appears to be a theoretical assumption that may not apply in real-world scenarios.\", \"questions\": \"My concerns mainly lie in the practical feasibility of the proposed attack.\\n1. Is the attack practically feasible in real-world scenarios, or is it only feasible in theory?\\n2. 
In the work, the authors approximate the outer maximum objective with the number of Gaussians, which appears to be a theoretical assumption that may not apply in real-world scenarios.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear Reviewer gaGQ,\\n\\nThank you for your time and effort in reviewing our paper. Given the limited time for discussion, we would appreciate it if you could let us know whether we have fully addressed your concerns; or if there are any further questions, we are happy to help resolve.\\n\\nBest regards,\\n\\nAuthor of #968\"}", "{\"comment\": \"Dear Reviewer SqKM,\\n\\nThank you for your time and effort in reviewing our paper. Given the limited time for discussion, we would appreciate it if you could let us know whether we have fully addressed your concerns; or if there are any further questions, we are happy to help resolve.\\n\\nBest regards,\\n\\nAuthor of #968\"}", "{\"comment\": \"Dear Reviewer ybEK,\\n\\nThank you for your time and effort in reviewing our paper. Given the limited time for discussion, we would appreciate it if you could let us know whether we have fully addressed your concerns; or if there are any further questions, we are happy to help resolve.\\n\\nBest regards,\\n\\nAuthor of #968\"}" ] }
Exnt2DcdKD
NIRANTAR: Continual Learning with New Languages and Domains on Real-world Speech Data
[ "Tahir Javed", "Kaushal Santosh Bhogale", "Mitesh M Khapra" ]
We present Nirantar based on a large-scale effort to collect extempore and conversational speech data from participants spanning 22 languages across diverse locations in India. Given the extensive number of languages and locations involved, data is collected in incremental batches. Each batch introduces new languages, new domains (locations), or both, creating a practical playground for continual learning (CL). Nirantar contains a total of 3250 hours of human-transcribed speech data covering 208 Indian districts across 22 languages, with 1720 hours newly released as a part of this work. The data inflow and resulting multilingual multi-domain episodes are based on real-world data collection rather than simulated episodes commonly found in existing CL datasets. In particular, the amount of data collected and the number of languages and domains involved are not uniform across episodes, reflecting a practical and real-world continual learning scenario. This dataset serves as a playground for training and evaluating CL approaches in three different scenarios: Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL), which has not been studied before. To establish the dataset's usefulness, we evaluate several existing CL approaches within these scenarios. Our findings indicate that the behaviour of these algorithms varies across the three scenarios, emphasizing the need for detailed independent studies of each.
[ "continual learning", "speech", "recognition", "datasets", "indian languages", "multilingual asr" ]
Reject
https://openreview.net/pdf?id=Exnt2DcdKD
https://openreview.net/forum?id=Exnt2DcdKD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yMbrNqRlfQ", "rzRLTZWMoX", "nka0qCPAiX", "miOf4GctIW", "jjhDomXHeB", "jdIwBJZhiQ", "ePBkwt7PvM", "cbGPJc3lED", "b7OP8l4YSq", "YjnrYP5nQg", "WssN3EmW4o", "VqmHF0JEr7", "Ul0e8jUX9B", "M0EmhtokmP", "KJ3nOsERqz", "K4Dwosw6rv", "K0eMHBOxyo", "EthxTpD2vn", "C07g1Gle8h", "9jJaL7hSlJ", "2a2AeqOglk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_review" ], "note_created": [ 1732249682066, 1732249927104, 1732620299487, 1732249878052, 1730540674603, 1732987327120, 1732592475540, 1730763025316, 1732597039492, 1732250503831, 1732250547239, 1732250011545, 1730722075540, 1730709182440, 1732249772946, 1732250129043, 1733226174861, 1733206860534, 1737524271441, 1734633407870, 1730702410306 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_YNAF" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_JqUP" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_6Bxs" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_oRB3" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_JqUP" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_XYQq" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Authors" 
], [ "ICLR.cc/2025/Conference/Submission13603/Authors" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_oRB3" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13603/Area_Chair_NB1d" ], [ "ICLR.cc/2025/Conference/Submission13603/Reviewer_oRB3" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer 6Bxs\", \"comment\": \"We sincerely thank the reviewer for their useful feedback, please find our responses below.\\n \\n Items marked with ** indicate hyperlinks\\n\\n**W1: \\u201c... contribution here feels mostly incremental \\u2026\\u201d**\\n\\nWe have cited the prior works, including CL-MASR, Sadhu et al., Chang et al., and Li et al., and duly acknowledged their contributions in the related work section. Indeed, the lack of comprehensive settings that integrate both language and domain shifts, realistic episodic designs, and evolving benchmarks in existing datasets motivates our work. Specifically, in contrast to these works, our work makes the following contributions:\\n- **A novel LIDIL setting**: Nirantar facilitates the combined study of language and domain shifts through a language-incremental-domain-incremental (LIDIL) setting. This interplay has not been explored in prior works, adding a new dimension to continual learning research.\\n- **Data episodes designed from Real-World use-cases of CL**: Unlike prior works that rely on episodes crafted synthetically from existing datasets, the data episodes in Nirantar reflect real-world domain and linguistic variability.. This better represents the diversity encountered in practical applications.\\n- **Longer Episodic Sequences**: Prior works are limited to shorter episodic sequences (fewer than 4 episodes for DIL and fewer than 10 for TIL). 
In contrast, Nirantar enables experiments over 11 episodes across all three scenarios (LIL, DIL, and LIDIL), providing a more robust comparison of CL approaches for extended learning scenarios.\\n- **An Evolving Evaluation Benchmark**: Nirantar introduces an evolving benchmark by continuously adding 15-minute samples to the test set as new data is collected. Additionally, since the test data is sampled at the district level, it naturally supports evaluation in an episodic setting.\\n\\nFurthermore, [Table** 3](https://anonmyous.objectstore.e2enetworks.net/nirantar-related-datasets.png) (also included in the appendix of the revised manuscript) presents a comparative overview of relevant datasets that can be used in LIL, DIL and LIDIL scenarios. As this table shows, ours is the only dataset that contains audio samples collected from the field, manually transcribed, and organized into natural episodes. \\n\\n**W2: \\u201c... Expanding the dataset with additional languages from public datasets \\u2026\\u201d**\\n\\nWe agree that adding more languages from public datasets and analyzing language relationships and similarities could provide additional insights. However, we would like to highlight some limitations of the suggested datasets:\\n- While FLEURS provides linguistic diversity with data for 102 languages, it primarily consists of read speech, derived from the dev and devtest splits of the FLoRes dataset. 
This design imposes significant constraints on its utility for studying CL paradigms:\\n - It lacks domain information to enable domain-incremental and language-incremental-domain-incremental learning.\\n - Due to its design, the dataset contains 10 hours of training data per language, which is not sufficient to use it even in the language-incremental learning setting.\\n- Although Common Voice is large and diverse, consisting of data for 131 languages, it lacks domain information, making it unsuitable for studying domain-incremental learning (DIL) or the language-incremental-domain-incremental (LIDIL) setting.\\n\\nWe have also reviewed other existing publicly available datasets, but none meet the requirements for evaluating CL models across all three scenarios (LIL, DIL, and LIDIL). We once again point to [Table** 3](https://anonmyous.objectstore.e2enetworks.net/nirantar-related-datasets.png) (also included in the appendix of the revised manuscript) which presents a comparative overview of relevant datasets that can be used in LIL, DIL and LIDIL scenarios.\"}", "{\"title\": \"Response to reviewer JqUP (continue)\", \"comment\": [\"**Q1 & Q2: \\\"Compensation, consent, guidelines\\\"**\", \"We will include additional details (participant instructions mentioned below) and sample data examples in the appendix for clarity.\", \"Referring to Section 7, we would like to highlight that the data collection process adheres to the guidelines established for IndicVoices [1]. This process was thoroughly reviewed and approved by the Institute Ethics Committee. Participants were fully informed about the data collection, their involvement, and the use of their data, and their consent was obtained beforehand. They received compensation aligned with local daily wages for their time and effort. No PII data will be shared externally, and measures were implemented to anonymize and protect sensitive information. Project staff were also compensated appropriately. 
Nirantar will be released under the CC-BY-4.0 license, permitting commercial use.\", \"The following are the instructions shared with the participants:\", \"**Aim of the Project**: The project aims to collect data for developing and evaluating speech technology tailored to your language. This technology includes everyday applications explained by the coordinator.\", \"**Purpose of Data Collection**: Your data will be used for both commercial and non-commercial purposes to improve speech technology for your language.\", \"**Amount of Time**: The process will take approximately 1 to 4 hours.\", \"**Compensation**: You will be compensated with INR X, as communicated by the coordinator. This amount X is equivalent to the local half-day wage in the region.\", \"**Consent**: By participating, you agree to the terms outlined, including the use of your data as described. Please proceed only if you consent and sign the consent form.\", \"**Registration**: Your details will be collected during registration. To ensure privacy, none of this information will be shared with third parties. You are required to upload your signed consent form as part of the registration process.\", \"**App Installation**: Download the \\u201c<Anonymous>\\u201d app from the Google Play Store on your Android device.\", \"**Login**: Log in using the access code assigned to you at registration. Verify this code with the coordinator.\", \"**Fetch Tasks**: Click \\u201cSubmit Tasks/Get New Tasks\\u201d to view your assigned tasks.\", \"**Recording**: Click on each task, read the prompt, and record your response. If unsure about a task, ask the coordinator for clarification. Take breaks as needed and complete tasks at your own pace. You may skip questions if uncomfortable.\", \"**Submit a Task**: After recording, click \\u201cStop\\u201d to replay your response. If you and/or the coordinator are satisfied, click \\\"Next\\\". 
If not, re-record.\", \"**Two-party Conversations**: After completing app tasks, you\\u2019ll be paired with another participant for a scenario-based conversation. You can select your partner, role, and scenario from available options shared by the coordinator.\", \"**Logging Out**: Once all tasks are complete, return to the home screen and click \\u201cSubmit Tasks\\u201d. Wait for confirmation, then record a video for identity verification.\", \"[1] Javed, T., Nawale, J. A., George, E. I., Joshi, S., Bhogale, K. S., Mehendale, D., ... & Khapra, M. M. (2024). IndicVoices: Towards building an Inclusive Multilingual Speech Dataset for Indian Languages. arXiv preprint arXiv:2403.01926.\"]}", "{\"title\": \"Response to Reviewer oRB3\", \"comment\": [\"We would like to clarify the distinct contributions of Nirantar, emphasizing that its novelty goes far beyond simply increasing the volume of data relative to IndicVoices. While data volume is one axis of comparison, Nirantar introduces multiple new dimensions that were not within the scope of IndicVoices. Below, we outline the key differences:\", \"**Distinct Purpose and Focus on Continual Learning (CL):** IndicVoices was designed as a foundational dataset for building multilingual ASR systems and was never intended to address the challenges of continual learning. Its primary goal was to establish a comprehensive dataset for static model training. Recognizing the practical challenge of training downstream models with incremental data, Nirantar explicitly incorporates a continual learning (CL) framework as one of its core goals. This focus fundamentally shifts the purpose of the dataset from static ASR development to exploring the complexities of training models incrementally with multilingual and multi-domain data inflow.\", \"**Beyond Dataset Introduction: A Novel CL Framework:** Nirantar does not simply introduce a dataset but establishes a new lens for research in CL. 
It supports three Incremental Learning (IL) scenarios: Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL). LIDIL, in particular, has never been studied in the literature, making this work a first-of-its-kind contribution to CL research. This dimension of the work was entirely absent in IndicVoices, which did not address CL scenarios at all.\", \"**Curated Test Sets for Continual Evaluation:** Nirantar introduces carefully curated test sets designed for continual evaluation. These test sets ensure that models trained incrementally can be assessed meaningfully, both during development and in future work. In contrast, IndicVoices provides a static test set (5 hours per language) that is well-suited for general-purpose ASR evaluation but entirely unsuitable for studying CL scenarios. This distinction highlights the forward-looking intent of Nirantar to enable robust continual evaluation over time.\", \"**Data Collection and Scale:** While the collection process for Nirantar builds upon the methodology introduced in IndicVoices, achieving the increased scale required significant additional effort. The sheer logistical and operational complexity of adding this volume should not be discounted, even if the same underlying collection process was followed. Scaling to such an extent, while maintaining quality and diversity, required a substantial effort.\", \"**Summary**\", \"While the collection process overlaps with IndicVoices, Nirantar represents a fresh look at the problem by addressing entirely new challenges and enabling research that was not in the intended scope of IndicVoices. 
Its focus on continual learning, novel IL scenarios, and tailored evaluation frameworks firmly establish Nirantar as a distinct and substantial contribution, not merely an incremental addition to IndicVoices.\"]}", "{\"title\": \"Response to reviewer JqUP\", \"comment\": \"We sincerely thank the reviewer for their useful feedback, please find our responses below.\\n \\n Items marked with ** indicate hyperlinks\\n\\n**W1: \\u201cLimited baselines, architecture based methods\\u201d**\\n\\nAs discussed in Section 1 of the paper, in continual learning (CL) settings involving a large number of languages and domains, architecture-based approaches can lead to model bloat and unnecessary complexity. However, based on the reviewer\\u2019s suggestion, we explored this approach in the LIL scenario, where we added up to 11 adapters (one for every new language). These adapters were integrated into each Conformer block of the Conformer-L model, with a bottleneck dimension of 64, resulting in an additional 1 million parameters per language. The results of this experiment are presented in [Figure** 10](https://anonmyous.objectstore.e2enetworks.net/nirantar-lil-adapters.png) (also included in the appendix of the revised manuscript):\\n- The Adapters method outperforms all other CL approaches, except for ER, in terms of AMER and Backward Transfer. This is primarily because each new episode adds an adapter layer, which prevents forgetting during the training process, as each episode trains a different adapter without modifying the base model. Interestingly, the difference between Joint FT and Adapters can be attributed to the number of parameters involved. We believe that increasing the adapter's bottleneck dimension to expose more trainable parameters could further reduce the gap. \\n- Forward transfer is worse for adapters because they are specifically tuned for individual languages in each episode, without facilitating knowledge transfer to future episodes. 
This limits the ability to leverage shared knowledge across languages and domains, which could benefit subsequent tasks. \n- Adapters exhibit the highest Intransigence Measure, as the entire backbone stays frozen, and only the language-specific adapters are updated during each episode. This introduces rigidity, but it also helps mitigate catastrophic forgetting. That said, during this experiment, the number of parameters increased by 11 million (1 million per episode). If extended to a domain-incremental or language-incremental-domain-incremental setting, the parameter count could reach an order of magnitude of O(100), making it impractical for real-world applications.\n\nWe will add the above results to the final version of the paper and highlight that ER-based approaches still remain a more feasible alternative, as they perform better on most metrics and are applicable in all three settings (LIL, DIL, LIDIL). Thank you again for suggesting this experiment, as we believe it has helped make our analysis more comprehensive. \n\n**W2: \u201cCross-lingual transfer\u201d**\n\nWe study the cross-lingual transfer of information for two language families, Indo-Aryan and Dravidian, in the LIDIL setting. [Figures** 11 to 13](https://anonmyous.objectstore.e2enetworks.net/nirantar-crosslingual-lidil.png) (also included in the appendix of the revised manuscript) illustrate the joint results (row 1) as well as results split by language families (rows 2 and 3). It can be clearly seen that the Average MER (AMER) and Backward Transfer (BT) for the Dravidian languages are better than for Indo-Aryan languages. This could imply that Dravidian languages are more related to each other than Indo-Aryan languages, facilitating better transfer of information. 
Additionally, the smaller number of languages in the Dravidian group could also contribute to these improved results, as fewer languages may reduce the complexity and help the model focus on the shared linguistic features within this group. The overall trends, however, suggest that Dravidian languages show more stable learning behavior in the LIDIL setting compared to Indo-Aryan languages.\n\nWe thank the reviewer for suggesting the possibility of exploring cross-lingual transfer. We will add the above analysis and results to the final version of the paper and hope that this helps in addressing this important question.\"}", "{\"summary\": \"The paper introduces Nirantar, a large-scale dataset designed for continual learning (CL) in ASR, spanning 22 Indian languages and over 3,250 hours of human-transcribed speech. This dataset includes diverse domains, represented by 208 districts across India, to model real-world multilingual, multidomain CL challenges.\n\nNirantar's primary contributions include establishing 3 CL scenarios\u2014Language-Incremental Learning (LIL), Domain-Incremental Learning (DIL), and Language-Incremental Domain-Incremental Learning (LIDIL). The LIDIL scenario is novel, introducing both new languages and domains across episodes, a setup that reflects real-world data inflow patterns and has not been previously explored in CL.\n\nTo validate the dataset's utility, the authors benchmark several existing CL techniques (including Elastic Weight Consolidation, Experience Replay, and Memory-Aware Synapse) across the 3 scenarios. Experiment results suggest potential areas for improvement in CL approaches for multilingual, multidomain ASR.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"### Originality\nThe Nirantar dataset presents a valuable resource for the speech research community. 
With 22 Indian languages and over 3,250 hours of human speech, including 1,720 hours newly released as part of this work, and human-annotated transcriptions, the dataset spans a wide variety of topics, domains, and conversational scenarios. The paper introduces a novel Language-Incremental Domain-Incremental Learning (LIDIL) scenario, expanding beyond the traditional Language-Incremental (LIL) and Domain-Incremental (DIL) scenarios that form the three core scenarios of the Nirantar dataset. The inclusion of LIDIL sets a new benchmark for future studies in multilingual and multidomain CL under real-world scenarios.\n\n### Quality\nThe paper is well-structured and offers a clear overview of the data collection, scenarios, and experimental evaluations on existing CL approaches.\n\n### Clarity\nThe authors present a clear breakdown of 3 CL scenarios.\n\n### Significance\nThe dataset\u2019s breadth across 22 Indian languages and 208 districts makes it a valuable resource for developing speech models adaptable to language and domain shifts.\", \"weaknesses\": \"The paper does not report pre-CL performance metrics, such as Word Error Rate (WER), for individual episodes across languages and domains, which limits understanding of Nirantar\u2019s distinctiveness as a CL dataset. Specifically, existing ASR datasets like GigaSpeech 2 could be reorganized to create a comparable multi-domain, multilingual LIDIL dataset, so to strengthen its case as a novel CL dataset, additional evidence is needed. For instance, including initial statistics per language, domain, and episode would provide insights into each batch's difficulty across diverse linguistic and regional characteristics, highlighting its uniqueness as a new ASR CL dataset. Additionally, including initial ASR WER would help clarify Nirantar's quality as an ASR dataset.\", \"questions\": \"### Dataset Quality:\n1. 
Could you provide details on the quality assurance procedures and any pre-CL performance metrics, such as initial Word Error Rate (WER), to validate transcription accuracy across languages/domains/episodes? Understanding these methods and statistics would clarify the dataset\\u2019s reliability and distinctiveness for continual learning applications.\\n\\n2. Could you elaborate on the procedures to ensure topic and domain diversity across districts, and how this diversity is measured or validated in different episodes? These insights would help demonstrate the dataset\\u2019s capacity to represent unique linguistic and regional characteristics, supporting its role as a comprehensive ASR CL dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle reminder to all reviewers\", \"comment\": \"We would like to sincerely thank all the reviewers for their thoughtful feedback and constructive suggestions. We have made every effort to address the comments and improve the manuscript. We are also grateful to reviewers **JqUP** and **oRB3** for their prompt replies to our initial comments. As the deadline for the author-reviewer discussion phase (**Dec 2nd**) is approaching, we kindly request reviewers **6Bxs**, **XYQq**, **YNAF**, and **oRB3** to check our responses and let us know if any further clarifications are needed from our side. If all the concerns are addressed we request the reviewers to appropriately increase their scores.\"}", "{\"comment\": \"Thank you for your detailed and thoughtful responses. I also took the time to read the comments and responses provided to other reviewers.\\n\\nRegarding the issues of compensation and user consent, I am personally satisfied with the steps taken to address these concerns. 
However, I defer to the Ethics Reviewer for the final decision, as I don't have sufficient experience in this area and cannot assess whether the measures meet the conference's thresholds.\\n\\nI appreciate the additional experiments with adapters and the expanded discussion on cross-lingual transfer, which significantly enhance the analysis and strengthen the paper's contributions.\\n\\nI maintain my original score, I believe the paper should be accepted.\"}", "{\"summary\": \"Nirantar is a large-scale dataset featuring 3,250 hours of human-transcribed speech across 22 languages from 208 districts in India, with 1,720 hours newly released in this work. Collected in incremental batches introducing new languages and locations, Nirantar creates a real-world continual learning (CL) setting that supports training and evaluation in Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL) scenarios. Initial evaluations show that CL approaches behave differently across these scenarios, highlighting the dataset's utility for diverse CL research.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The dataset encompasses a diverse range of languages across India.\", \"They offer three distinct continual learning settings: Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL).\", \"The paper is well-written and easy to follow, with a comprehensive analysis included.\"], \"weaknesses\": [\"The first two settings, Language-Incremental and Domain-Incremental, have been explored in previous studies, including CL-mARS, which covers various CL methods in multilingual contexts (https://arxiv.org/pdf/2310.16931), and works by Sadhu et al. (https://www.isca-archive.org/interspeech_2020/sadhu20_interspeech.pdf), Chang et al. (https://arxiv.org/abs/2104.01616), and Li et al. 
(https://arxiv.org/abs/2302.01496), which investigate incremental domain setups. In light of these prior studies, the contribution here feels mostly incremental in these areas.\", \"The dataset focuses exclusively on languages from India, which provides rich diversity, though it could be beneficial to include an analysis of language relationships and similarities to understand their potential impact on performance. Expanding the dataset with additional languages from public datasets like FLEURS or CommonVoice, especially those with minimal similarity, would further strengthen the study.\", \"Additionally, task order is determined randomly, yet exploring the effects of task ordering on performance or providing an analysis that considers language similarity, task order, and performance metrics could add valuable insights.\", \"Finally, while the study examines rehearsal-based and regularization-based CL methods, incorporating architecture-based approaches, such as Progressive Neural Networks (PNN), Piggyback (PB), and Learning to Prompt (L2P), could offer a more comprehensive view, as highlighted in prior literature.\"], \"questions\": \"Refer to weakness\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"Since this work introduces a new dataset, it\\u2019s essential to ensure that privacy and ethical practices are observed. Notably, Section 7 outlines the ethical guidelines and practices they follow.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the very detailed answer addressing my comments.\\nI may have missed this in the rebuttal but could you clarify what novel aspects Nirantar brings beyond IndicVoices, besides increased data volume? 
For example, are there any differences in data collection, organization, or intended use cases that distinguish Nirantar?\"}", "{\"title\": \"Response to reviewer YNAF\", \"comment\": \"We sincerely thank the reviewer for their useful feedback, please find our responses below.\n \n Items marked with ** indicate hyperlinks\n\n**W1 (partial): \u2018existing ASR datasets like GigaSpeech 2 could be reorganized\u2019**\n\nWhile we acknowledge the existence of certain datasets that can be repurposed for continual learning in LIL or DIL scenarios, we could not identify any dataset suitable for studying the language-incremental-domain-incremental (LIDIL) setting. Indeed, the lack of comprehensive settings that integrate both language and domain shifts, realistic episodic designs, and evolving benchmarks in existing datasets motivates our work. [Table** 3](https://anonmyous.objectstore.e2enetworks.net/nirantar-related-datasets.png) (also included in the appendix of the revised manuscript) provides a comparative overview of relevant datasets for LIL, DIL, and LIDIL scenarios. It should be noted from this table that ours is the only dataset which contains audio samples collected from the field that are manually transcribed and have natural episodes. Our work advances CL for ASR in the following ways:\n- **A novel LIDIL setting**: Nirantar facilitates the combined study of language and domain shifts through a language-incremental-domain-incremental (LIDIL) setting. This interplay has not been explored in prior works, adding a new dimension to continual learning research.\n- **Data episodes designed from Real-World use-cases of CL**: Unlike prior works that rely on episodes crafted synthetically from existing datasets, the data episodes in Nirantar reflect real-world domain and linguistic variability. 
This better represents the diversity encountered in practical applications.\\n- **Longer Episodic Sequences**: Prior works are limited to shorter episodic sequences (fewer than 4 episodes for DIL and fewer than 10 for TIL). In contrast, Nirantar enables experiments over 11 episodes across all three scenarios (LIL, DIL, and LIDIL), providing a more robust comparison of CL approaches for extended learning scenarios.\\n- **An Evolving Evaluation Benchmark**: Nirantar introduces an evolving benchmark by continuously adding 15-minute samples to the test set as new data is collected. Additionally, since the test data is sampled at the district level, it naturally supports evaluation in an episodic setting.\\n\\n**W1 (partial): \\u201cincluding initial statistics per language, domain, and episode\\u201d**\\n\\n[Table** 4](https://anonmyous.objectstore.e2enetworks.net/nirantar-dataset-detailed.png) (also included in the appendix of the revised manuscript) presents the detailed statistics of data for each language, domain as well as episode information across all three scenarios (LIL, DIL, and LIDIL).\\n\\n**Q1 (Partial): \\u201cquality assurance procedures\\u201d**\\n\\nThe data was collected and transcribed using a two-level process, following [1], involving a \\\"maker\\\" and a \\\"checker.\\\" Initially, the collected data underwent a QA verification check against 20 relevant criteria, including factors such as low volume, background noise, poor extempore quality, mispronunciations, irrelevant responses and so on. Additionally, the audio was cross-verified with metadata such as gender and age group. A detailed list of these parameters can be found here [1]. To ensure transcription quality, we follow a maker-checker-superchecker workflow. After the initial transcription by the maker, the data goes through two rounds of review (checker and a superchecker). 
The super-checkers, who are very experienced language experts, would either instruct the checkers to rework the transcription if it was inaccurate or directly make corrections themselves, ensuring quality. With data collection and transcription carefully conducted by humans and validated through multiple levels of checks, we are confident that the likelihood of significant errors is minimal. \\n\\n**Q1 (Partial): \\u201c...any pre-CL performance metrics\\u2026\\u201d**\\n\\nWe have added the pre-CL performance metrics, such as initial Word Error Rate (WER) using language specific trained models, in [Table** 4](https://anonmyous.objectstore.e2enetworks.net/nirantar-dataset-detailed.png), which is also included in the appendix of the revised manuscript\\n\\n\\n\\n[1] Javed, T., Nawale, J. A., George, E. I., Joshi, S., Bhogale, K. S., Mehendale, D., ... & Khapra, M. M. (2024). IndicVoices: Towards building an Inclusive Multilingual Speech Dataset for Indian Languages. arXiv preprint arXiv:2403.01926.\"}", "{\"title\": \"Response to reviewer YNAF (continue)\", \"comment\": \"**W2: \\u201cdiversity in different episodes\\u201d**\\n\\nThank you for sharing your feedback. We have added [Figures** 14 to 15](https://anonmyous.objectstore.e2enetworks.net/nirantar-domain-vocab-evolution.png) (also included in the appendix of the revised manuscript), which illustrate how the domains and vocabulary evolve over episodes. From the figures, we observe a steady increase in both vocabulary count and domain coverage as we move from episode 0 to episode 11. The vocabulary and domain sizes grow consistently across all scenarios (LIL, DIL, and LIDIL). 
Furthermore, the change is more gradual in the LIDIL scenario compared to DIL and LIL, making it a unique scenario to study.\"}", "{\"title\": \"Response to reviewer XYQq\", \"comment\": \"We sincerely thank the reviewer for their useful feedback, please find our responses below.\\n\\n**W1: \\u201cfluctuating trends\\u201d**\\n\\nThe fluctuations observed in Figures 3-5 are not solely due to data imbalance across languages but rather highlight the need for improved continual learning (CL) approaches to handle episodic data effectively. For instance, in the LIL scenario, the fluctuations in Forward Transfer (FT) and Intransigence reflect that certain data batches are inherently more challenging.\\nA specific example can be seen when transitioning from episode 8 to episode 9 in the LIL scenario, where Manipuri\\u2014a Tibeto-Burman language that is significantly different from the previously seen languages\\u2014is introduced. This results in a sharp decline in FT, as the model struggles to adapt to this distinct language while retaining information from prior tasks. We view this as a strength of our dataset, as it provides valuable insights into the challenges faced by CL methods and helps evaluate them more effectively.\\n\\n**Q1: \\u201cstrategies for data imbalance\\u201d**\\n\\nIn continual learning (CL) models, data imbalance across languages and domains can lead to biased learning, where the model favours more frequently encountered classes, thus diminishing performance on underrepresented categories. In our study, we use temperature-based sampling [1,2] which adjusts the probability of selecting samples based on their rarity in the episode. This is a widely used method when training multilingual models [3] where data imbalance across languages is unavoidable. In the context of data imbalance, it can help ensure that minority class examples are more likely to be chosen for training. 
By increasing the \"temperature\" in the sampling process, the model reduces bias toward more frequently occurring data points and encourages more balanced exposure to various classes. This is particularly important when training with episodes involving multiple languages (DIL and LIDIL settings) and does not apply for LIL, where a single language is introduced at a time. In our experiments we set alpha to 1.5.\nIn addition to this, data augmentation techniques, such as synthetic data generation [4], are widely used to address class imbalance. These methods artificially increase the representation of underrepresented classes by generating additional data, thereby balancing their occurrence with other classes in the dataset. We have not explored the use of synthetic data as it is beyond the scope of our current work.\n\n[1] Aharoni, R., Johnson, M., & Firat, O. (2019, June). Massively Multilingual Neural Machine Translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 3874-3884).\n\n[2] Wang, X., Tsvetkov, Y., & Neubig, G. (2020, July). Balancing Training for Multilingual Neural Machine Translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 8526-8537).\n\n[3] Conneau, A., & Lample, G. (2019). Cross-lingual language model pretraining. Advances in Neural Information Processing Systems, 32.\n\n[4] Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6, 60. https://doi.org/10.1186/s40537-019-0197-0
\"}", "{\"summary\": \"The paper introduces Nirantar, a new benchmark for continual learning (CL) in automatic speech recognition (ASR) that deals with real-world challenges of adding new languages and domains over time. Nirantar is based on a large dataset of 3,250 hours of speech across 22 Indian languages and 208 districts (domains), collected in phases that mimic real-world, incremental data collection.\", \"the_main_contributions_of_the_paper_are\": \"1) New Data: The paper relies on an existing dataset, but contributes an additional 1720 hours of newly collected speech data. \n\n2) Real Scenarios: the benchmark provides realistic settings for continual learning, including Language-Incremental Learning (LIL), Domain-Incremental Learning (DIL), and a novel Language-Incremental Domain-Incremental Learning (LIDIL), explored in ASR for the first time.\n\n3) Comprehensive Benchmarking: The paper evaluates several popular CL methods (e.g., Experience Replay, Elastic Weight Consolidation) across all scenarios, offering a thorough comparison of CL approaches.\n\n4) Open-Source Contributions: All resources from this work will be made publicly available to encourage further research in multilingual and multi-domain continual learning for ASR.\n\nIn summary, Nirantar provides a valuable benchmark that reflects real-world data collection and introduces new challenges for continual learning in speech recognition.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"**Novel Contribution:** The work is original, not only contributing newly collected data, but including a novel Continual Scenario for ASR: Language-Incremental Domain-Incremental Learning (LIDIL). 
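[Editorial note] The temperature-based sampling described in this response can be sketched as follows. This is not the authors' training code: the exponent convention follows Aharoni et al. (2019), and the language names and hour counts are hypothetical.

```python
def sampling_weights(hours_per_language, alpha=1.5):
    """Smooth per-language sampling probabilities with a temperature alpha.

    Raw shares p_l = n_l / N are raised to 1/alpha and renormalised;
    alpha > 1 flattens the distribution, boosting rarer languages.
    """
    total = sum(hours_per_language.values())
    smoothed = {lang: (n / total) ** (1.0 / alpha)
                for lang, n in hours_per_language.items()}
    z = sum(smoothed.values())
    return {lang: w / z for lang, w in smoothed.items()}


# Hypothetical hour counts for two languages in one episode.
hours = {"lang_a": 400.0, "lang_b": 25.0}
weights = sampling_weights(hours, alpha=1.5)
# With alpha > 1, lang_b's sampling share rises above its raw
# share of 25/425 (about 5.9%), reducing bias toward lang_a.
```

In practice such weights would be fed to a weighted sampler over training batches; with alpha = 1 the raw data proportions are recovered.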
This scenario provides a more realistic view of real-world multilingual and multi-domain environments compared to many CL datasets that rely on synthetic or structured data, making its impact relevant beyond the specific languages covered.\"], \"comprehensive_evaluation\": [\"The evaluation is extensive, covering popular continual learning methods like Experience Replay, Elastic Weight Consolidation, and Memory-Aware Synapse, along with meaningful metrics. The benchmark also evaluates CL methods across three distinct scenarios (LIL, DIL, and LIDIL), offering a detailed and multi-faceted analysis that allows for a nuanced understanding of model behavior in varying incremental learning conditions.\", \"**Clarity and Structure:** The paper is structured logically, with clear definitions and explanations of the continual learning scenarios and metrics. The description of data collection, experimental setups, and training procedures is transparent, with steps and design choices explained in detail. This clarity makes it easier for researchers to replicate or build upon the work.\", \"**High Impact for Low-Resource ASR:** The benchmark is valuable for advancing low-resource ASR research, especially for underrepresented languages like Indian languages. By focusing on multilingual, multi-domain continual learning, the benchmark tackles unique challenges in low-resource ASR.\", \"**Open Access:** the open release of the dataset and resources under a permissive license further promotes research and collaboration in the field, supporting wider accessibility of multilingual ASR technology.\"], \"weaknesses\": \"- **Limited Baselines:** While the paper evaluates CL methods from replay-based and regularization-based approaches, it excludes architecture-based methods arguing they are impractical for real-world settings as they add parameters for each new language and domain, leading to excessive complexity as episodes increase (lines 328-332). 
However, this overlooks recent advances in lightweight architecture-based methods like language-specific adapters, which have improved ASR performance even in domains harder than the training data [1]. Including such methods could provide valuable baselines and insights into scalable alternatives that balance parameter efficiency and adaptability.\\n\\n- **Discussions on Cross-Lingual Transfer:** The dataset is highly diverse, covering 22 Indian languages and 208 districts, but this diversity is underexplored in the analysis. A more granular breakdown of metrics would allow a clearer view of knowledge transferability across closely related languages and domains. For instance, including subgroup analyses by geographic distance, language family, or dialectal variations could highlight the impact of linguistic and regional proximity on model performance. Such details would enhance understanding of the effectiveness of CL techniques in low-resource settings. For example, on line 465, the authors attribute a performance decline to the introduction of a Tibeto-Burman language into the dataset predominantly composed of Indo-Aryan and Dravidian languages. Expanding on this, with metrics grouped by language family or linguistic characteristics, would provide more actionable insights into cross-language transfer. Additionally, including dataset attributes like vocabulary overlap, phonetic variations, or language family classification would better showcase the dataset\\u2019s uniqueness and clarify the transferability of findings to other low-resource multilingual datasets.\\n\\nReferences\\n\\n[1] Ferraz, T. P., Boito, M. Z., Brun, C., & Nikoulina, V. (2024). Multilingual Distilwhisper: Efficient Distillation of Multi-Task Speech Models Via Language-Specific Experts. ICASSP 2024.\", \"questions\": \"1) Could you provide details on how human subjects were compensated and whether explicit consent was obtained for both research and potential industrial use of their voices? 
Including protocols and consent guidelines in the appendix would be helpful.\\n\\n2) I recommend providing sample data in the appendix to illustrate the dataset.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": [\"The authors provide information and assert that these issues have been address; however, I am not entirely sure this is sufficient for acceptance. Therefore, I recommend an ethical review that considers these points:\", \"**Consent and Data Usage:** Details on whether participants provided informed consent specifically for both research and potential industrial applications of the data, and how data anonymization was ensured to protect privacy.\", \"**Fair Compensation:** Discussion on whether compensation for data contributors and annotators aligns with fair local standards to prevent exploitation.\"], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Nirantar, a dataset designed to facilitate continual learning (CL) through real-world, large-scale speech data from 22 Indian languages. Data is gathered incrementally across diverse languages and domains, making it distinct from typical CL datasets that rely on simulated episodes.\", \"nirantar_support_3_key_cl_scenarios\": \"Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL), which has not previously been studied. 
The authors\\u2019 evaluation of several existing CL algorithms in these scenarios reveals that algorithmic behavior varies significantly across them, suggesting the need for dedicated analyses tailored to each scenario.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The dataset\\u2019s design closely mirrors real-world CL requirements, providing a valuable resource for research on multilingual, multi-domain, and incremental learning.\\n\\nThe inclusion of both widely used CL scenarios (LIL, DIL) and the novel LIDIL scenario represents a significant contribution, opening new avenues for research.\\n\\nEvaluations of CL approaches provide useful initial insights, underscoring the complexity and variability of performance across the dataset\\u2019s scenarios.\", \"weaknesses\": \"Figures 3-5 in the report show fluctuating trends, which might indicate an imbalance in the amount of data available for each language. This imbalance could impact the model's performance on specific low-resource languages, potentially limiting its effectiveness in some areas.\\n\\n\\nWhile the evaluations provide a starting point, a deeper exploration of algorithmic adaptations or optimizations specifically for LIDIL could enhance the study\\u2019s practical impact.\", \"questions\": \"Given the natural variations in data size per language and domain, what strategies are recommended to handle data imbalance in the CL models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 6Bxs (continue)\", \"comment\": \"**W3: \\u201c... task order is determined randomly \\u2026\\u201d**\\n\\nWe acknowledge that the task order was determined randomly in the paper. To resolve this, we performed the study with two more random orderings to highlight that our observations and analysis stay consistent with other random orderings as well. 
The following lines list the original task order and two more permutations of it for the LIDIL scenario.\", \"random_order_1\": \"0\\u21921\\u21922\\u21923\\u21924\\u21925\\u21926\\u21927\\u21928\\u21929\\u219210\\u219211\", \"random_order_2\": \"0\\u219211\\u21921\\u21922\\u219210\\u21928\\u21925\\u21929\\u21923\\u21924\\u21926\\u21927\", \"random_order_3\": [\"0\\u21928\\u21926\\u21927\\u21929\\u21924\\u21925\\u21921\\u21922\\u21923\\u219211\\u219210\", \"[Figures** 7 to 9](https://anonmyous.objectstore.e2enetworks.net/nirantar-permute-lidil.png) (also included in the appendix of the revised manuscript) present the results for the original episodic sequence (Random Order 1) and two additional randomized sequences (Random Order 2 and Random Order 3) in the LIDIL scenario.\", \"**Consistency Across Permutations**: The AMER and BT performance of different continual learning methods (Inc. FT, ER, EWC, MAS) remains similar regardless of the episode order.\", \"**Stable Performance Rankings**: The relative rankings of methods (in terms of Forward Transfer, Backward Transfer, and Intransigence) are similar across all permutations. This shows that the order of episodes doesn\\u2019t drastically affect how methods compare in terms of learning new tasks and preserving previous knowledge.\", \"**Impact of Episode Order on Intransigence**: Intransigence fluctuates with episode order, suggesting that certain episode sequences are more challenging to train. For instance, episode 2 in the original order appears as episode 3 and episode 8 in the permuted orders (1 and 2), which corresponds to noticeable peaks in the intransigence measure.\", \"**W4: \\u201c... architecture-based approaches\\u2026\\u201d**\", \"As discussed in Section 1 of the paper, in continual learning (CL) settings involving a large number of languages and domains, architecture-based approaches can lead to model bloat and unnecessary complexity. 
However, based on the reviewer\\u2019s suggestion, we explored this approach in the LIL scenario, where we added up to 11 adapters (one for every new language). These adapters were integrated into each Conformer block of the Conformer-L model, with a bottleneck dimension of 64, resulting in an additional 1 million parameters per language. The results of this experiment are presented in [Figure** 10](https://anonmyous.objectstore.e2enetworks.net/nirantar-lil-adapters.png) (also included in the appendix of the revised manuscript):\", \"The Adapters method outperforms all other CL approaches, except for ER, in terms of AMER and Backward Transfer. This is primarily because each new episode adds an adapter layer, which prevents forgetting during the training process, as each episode trains a different adapter without modifying the base model. Interestingly, the difference between Joint FT and Adapters can be attributed to the number of parameters involved. We believe that increasing the adapter's bottleneck dimension to expose more trainable parameters could further reduce the gap.\", \"Forward transfer is worse for adapters because they are specifically tuned for individual languages in each episode, without facilitating knowledge transfer to future episodes. This limits the ability to leverage shared knowledge across languages and domains, which could benefit subsequent tasks.\", \"Adapters exhibit the highest Intransigence Measure, as the entire backbone stays frozen, and only the language-specific adapters are updated during each episode. This introduces rigidity, but it also helps mitigate catastrophic forgetting. That said, during this experiment, the number of parameters increased by 11 million (1 million per episode). 
If extended to a domain-incremental or language-incremental-domain-incremental setting, the parameter count could reach an order of magnitude of O(100), making it impractical for real-world applications.\", \"We will add the above results to the final version of the paper and highlight that ER based approaches still remain a more feasible alternative as it performs better on most metrics and is applicable in all the three settings (LIL, DIL, LIDIL). Thank you again for suggesting this experiment as we believe it has helped make our analysis more comprehensive.\"]}", "{\"title\": \"Response to reviewer oRB3\", \"comment\": \"We sincerely thank the reviewer for their useful feedback, please find our responses below.\\n \\n Items marked with ** indicate hyperlinks\\n\\n**W1 & W2: \\u201cdesign of datasets, volume of data\\u201d**\\n\\nWhile we acknowledge the existence of certain datasets that can be repurposed for continual learning in LIL or DIL scenarios, we could not identify any dataset suitable for studying the language-incremental-domain-incremental (LIDIL) setting. Indeed, the lack of comprehensive settings that integrate both language and domain shifts, realistic episodic designs, and evolving benchmarks in existing datasets motivates our work. [Table** 3](https://anonmyous.objectstore.e2enetworks.net/nirantar-related-datasets.png) (also included in the appendix of the revised manuscript) provides a comparative overview of relevant datasets for LIL, DIL, and LIDIL scenarios. It should be noted that from this table that ours is the only dataset which contains audio samples collected from the field which are manually transcribed and have natural episodes. Our work advances CL for ASR in the following ways:\\n- **A novel LIDIL setting**: Nirantar facilitates the combined study of language and domain shifts through a language-incremental-domain-incremental (LIDIL) setting. 
This interplay has not been explored in prior works, adding a new dimension to continual learning research.\\n- **Data episodes designed from Real-World use-cases of CL**: Unlike prior works that rely on episodes crafted synthetically from existing datasets, the data episodes in Nirantar reflect real-world domain and linguistic variability. This better represents the diversity encountered in practical applications.\\n- **Longer Episodic Sequences**: Prior works are limited to shorter episodic sequences (fewer than 4 episodes for DIL and fewer than 10 for TIL). In contrast, Nirantar enables experiments over 11 episodes across all three scenarios (LIL, DIL, and LIDIL), providing a more robust comparison of CL approaches for extended learning scenarios.\\n- **An Evolving Evaluation Benchmark**: Nirantar introduces an evolving benchmark by continuously adding 15-minute samples to the test set as new data is collected. Additionally, since the test data is sampled at the district level, it naturally supports evaluation in an episodic setting.\\n\\n**W3 & Q1, Q2, Q3: \\u201cEmpirical justification\\u201d**\\n\\n- The rationale behind using continual learning (CL) is to adapt a good base model to perform well on fresh data, which may come in the form of new languages, new domains, or both. Therefore, it is critical to start with a robust base model to test CL approaches effectively. For this reason, the largest number of hours was chosen, as it ensures sufficient data for establishing a solid foundation in the initial model. In our experiments, we make the following choices across scenarios:\", \"lid\": \"Start with half the languages as the base model.\", \"dil\": \"Start with half the districts for each language as the base model.\", \"lidil\": [\"Start with half the languages with half the districts as the base model.\", \"The remaining episodes correspond to the remaining number of languages (11) in our setup. 
However, this does not limit the episodes for DIL and LIDIL to a maximum of 11. Our evaluation framework is designed to evolve over time, enabling the addition of new episodes for both DIL and LIDIL scenarios.\", \"WER and CER can exceed 100%, leading to inconsistencies when comparing different approaches, particularly in high-error scenarios. To ensure a more standardized and interpretable evaluation, we use Match Error Rate (MER) as the primary metric. MER is bounded within the range [0, 1], where 0 represents no error between the ground truth and the model hypothesis, and 1 represents a completely incorrect hypothesis. For completeness, we will include WER and CER plots in the appendix for all our results.\", \"We apologize for the citation style, we have updated the manuscript with the revised citation style.\"]}", "{\"title\": \"Rebuttal Summary\", \"comment\": [\"We sincerely appreciate the reviewers for their insightful and constructive feedback. Their suggestions, including exploring different task orderings and integrating adapters, have been instrumental in enhancing the quality and depth of our manuscript. Below, we present a summary of the key points addressed in our rebuttal, as highlighted by the reviewers:\", \"**(6Bxs, YNAF)**: Highlighted limitations of existing datasets (e.g., FLEURS, Common Voice, GigaSpeech 2) for studying DIL and LIDIL scenarios as they lack domain information. Presented a comparative analysis, showing Nirantar as the only dataset with real-world, manually transcribed, and episodic data designed for Continual learning (CL). [[Table** 3]](https://anonmyous.objectstore.e2enetworks.net/nirantar-related-datasets.png)\", \"**(6Bxs)**: Addressed randomness in task order by including results for two additional permutations for the LIDIL scenario, showing consistency across different orderings. 
[[Figures** 7 to 9]](https://anonmyous.objectstore.e2enetworks.net/nirantar-permute-lidil.png)\", \"**(6Bxs, JqUP)**: Conducted experiments with adapters in the LIL setting, showing their effectiveness in preventing catastrophic forgetting during training. However, we still caution against the possibility of parameter bloat in scenarios when a large number of adapters are added (e.g., in DIL and LIDIL scenarios) and the limited scalability of adapters compared to ER-based approaches. [[Figure** 10]](https://anonmyous.objectstore.e2enetworks.net/nirantar-lil-adapters.png)\", \"**(JqUP)**: Explored cross-lingual transfer in LIDIL scenario for Indo-Aryan and Dravidian languages, revealing stable and better learning for Dravidian languages. [[Figures** 11 to 13]](https://anonmyous.objectstore.e2enetworks.net/nirantar-crosslingual-lidil.png)\", \"**(JqUP)**: Clarified detailed participant instructions, consent and guidelines, which shall be duly added to the final version of the manuscript.\", \"**(XYQq)**: Addressed the comment on \\\"fluctuating trends\\\", explaining that they reflect challenges of continual learning (CL) methods, such as adapting to significantly different new languages (e.g., Manipuri in LIL scenario) rather than mere data imbalance.\", \"**(XYQq)**: Discussed strategies for addressing data imbalance.\", \"**(6Bxs, oRB3)**: Highlighted Nirantar's unique contributions, including the novel LIDIL setting, real-world episodic designs, longer episodic sequences, and evolving evaluation benchmark that distinguish it from existing datasets. Emphasized that Nirantar's focus on CL, novel IL scenarios, and tailored evaluation frameworks mark it as a distinct advancement beyond IndicVoices.\", \"**(oRB3)**: Justified empirical choices in LIL, DIL, and LIDIL scenarios and clarified the use of Match Error Rate (MER) over WER/CER for consistent evaluation. 
Additional plots for WER and CER will be included in the final manuscript.\", \"**(YNAF)**: Provided statistics for language, domain, and episodes [[Table** 4]](https://anonmyous.objectstore.e2enetworks.net/nirantar-dataset-detailed.png) and demonstrated diversity in episodes through evolving vocabulary and domains. [[Figures** 14 to 15]](https://anonmyous.objectstore.e2enetworks.net/nirantar-domain-vocab-evolution.png)\", \"**(YNAF)**: Added details of multi-level QA processes for data collection and transcription to ensure minimal errors and included pre-CL performance metrics using language-specific models. [[Table** 4]](https://anonmyous.objectstore.e2enetworks.net/nirantar-dataset-detailed.png)\"]}", "{\"comment\": \"Thank you for the additional explanation about the contribution beyond increasing the volume of IndicVoices. I will raise my score accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": [\"The paper introduces Nirantar, a large-scale dataset and benchmark for continual learning (CL) in automatic speech recognition (ASR), focusing on real-world challenges of incremental data collection. Nirantar comprises 3,250 hours of human-transcribed speech spanning 22 Indian languages and 208 districts (domains), with 1,720 hours newly contributed in this work. The work claimed following contribution: 1) a dataset collected in a realist scenario mimicking real-world multilingual and multidoma in CL challenges, where audios were collected incrementally across languages and locations. 
The dataset support various CL scenarios including Language-Incremental Learning (LIL), Domain-Incremental Learning (DIL), and Language-Incremental Domain-Incremental Learning (LIDIL); 2) initial benchmarking the dataset on several existing CL methods (e.g., Experience Replay, Elastic Weight Consolidation, Memory-Aware Synapse) across these scenarios; authors also promised to make the datasets along with evaluation benchmarks publicly available\", \"Strength of this paper\", \"Built a dataset with diversity in domains/languages and reflecting real-world multilingual and multi-domain continual learning (CL) challenges, supporting incremental learning and adaptation across diverse languages and domains The dataset allows exploration of a new scenario, Language-Incremental Domain-Incremental Learning (LIDIL), beyond traditional Language-Incremental (LIL) and Domain-Incremental (DIL) settings. LIDIL offers a more realistic view of multilingual, multi-domain challenges and new research problems.\", \"Authors also conduct benchmarking and evaluation to evaluates several established CL methods, including Experience Replay, Elastic Weight Consolidation, and Memory-Aware Synapse across various scenarios (LIL, DIL, and LIDIL), understand the characteristics of the datasets, and provide analysis and insight for future research\", \"The dataset and resources will be released under CC-BY-4.0 license, encouraging collaboration and broader accessibility in multilingual ASR research. The dataset will be a valuable resource for the speech research community, setting a new benchmark for multilingual, multidomain, and incremental learning in automatic speech recognition (ASR), supplementing existing multilingual ASR corpora , and advancing research in various problems such as low-resource ASR\", \"Weakness of this paper\", \"Several reviewers raised few concerns/limitations of this paper. 
By addressing these limitations, the paper could strengthen its experiment and expand impact.\", \"Scope of the dataset and generalizability: the dataset exclusively focuses on Indian languages, which provides rich diversity but limits its generalizability. Expanding the dataset with additional languages from resources like FLEURS or CommonVoice, especially those with minimal similarity to Indian languages, would improve its applicability and relevance. The paper has limited exploration in several important aspects, e.g., cross-lingual transfer or language relationships to have subgroup analyses based on geographic distance, language family, dialectal variations, vocabulary overlap, or phonetic variations, and investigating algorithmic adaptations or architecture-based approaches like Progressive Neural Networks (PNN), Piggyback (PB), and Learning to Prompt (L2P).\", \"Experiment settings could be improved: the task order is determined randomly, but analyzing the effects of task order and incorporating language similarity into task sequencing could add valuable insights; the dataset\\u2019s imbalance in data availability across languages may skew results and it's worthwhile to have some discussion about the effect.\", \"Concerns about novelty: the dataset\\u2019s design does not convincingly show the added value of Nirantar and its uniqueness for testing CL methods, as similar schemes could be applied to other datasets, such as phased releases of CommonVoice. 
Many of the works were also built upon existing efforts such as IndicVoices, Language-Incremental (LIL) and Domain-Incremental (DIL) setting, with some incremental changes.\"], \"additional_comments_on_reviewer_discussion\": \"In addition to above weaknesses, reviewers also raised some other weaknesses and suggested improvements (e.g., explanations and justifications for certain experimental choices, improving writing, additional statistics about language, domain, and episode , and pre-CL performance metrics) during rebuttal. Some of the weakness have been improved / somewhat addressed during rebuttal session (e.g., further explanation and justification on the questions raised by reviewers, more experiment results added). Although some review rating was raised, the rating averaged over all reviewers are still at borderline. I think the session is too short and some weaknesses are hard to address in such a short period of time. Also there is a general concern about the significance of novelty. Plus the main contribution of this paper is on the creation of dataset; which I believe better venue than ICLR exists for such efforts. 
Given the high bar of ICLR, I think the paper is still of limited interest to the audience, and thus I recommend rejecting the paper and encourage the authors to rework these weaknesses and resubmit to future conferences.\"}
Why is the largest number of hours chosen?\", \"309: \\u201cWe create timelines of length \\u03c4 = 11\\u201d. Why was 11 chosen?\", \"377: \\u201cAverage MER: Match Error Rate\\u201d. Why not use WER or CER?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Exkm5OReTY
MaskTab: Masked Tabular Data Modeling for Learning with Missing Features
[ "Yudong Chen", "Zihua Xiong", "Shuai Fang", "Yuke Zhu", "Bo Zheng", "Sheng Guo" ]
Tabular machine learning has garnered increasing attention due to its practical value. Unlike the complete and standardized data often assumed in academia, tabular data primarily originates from industrial contexts and usually faces the issue of incomplete data samples, i.e., some features of a sample may be unpredictably missing. In this work, we introduce MaskTab, a masked tabular data modeling framework designed to facilitate model learning despite missing features. Instead of pursuing to accurately restore missing features like existing imputation methods, we jointly approach missing feature modeling and downstream tasks (e.g., classification) with a unified objective. Concretely, we propose to randomly drop out some solid features during training, equipped with a missing-related masked attention mechanism, to help the model rely more on trustworthy features when making decisions. Experiments on the very recent industry-grade benchmark, TabReD, suggest that our method surpasses the second DNN-based competitor by a clear margin, demonstrating its effectiveness and robustness in real-world scenarios. We will release the code and the model to facilitate reproduction.
[ "tabular data prediction", "masked learning", "missing features" ]
https://openreview.net/pdf?id=Exkm5OReTY
https://openreview.net/forum?id=Exkm5OReTY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywrhLdpJ7J", "JfP5pSaZ9X", "GtanBW86N9", "77F8vBcYxK", "5MLPh3Np33" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730224903678, 1730640638926, 1730320270562, 1730482384734, 1733724371199 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2125/Reviewer_VkKS" ], [ "ICLR.cc/2025/Conference/Submission2125/Reviewer_Ygbd" ], [ "ICLR.cc/2025/Conference/Submission2125/Reviewer_bUvz" ], [ "ICLR.cc/2025/Conference/Submission2125/Reviewer_3iL8" ], [ "ICLR.cc/2025/Conference/Submission2125/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose an empirical method to impute missing values in tabular data. In particular, they propose to learn an embedding for each missing value -- this embedding is shared across all missing values within a row. To train this embedding, they introduce a reconstruction loss: they simulate missingness and add a loss to predict their values. Finally, the authors also propose a missingness-aware attention block, which masks missing features from others.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is interesting because it deals with a frequently overlooked aspect of the tabular data in deep learning, namely missingness. Overall, the approach is simple and could benefit many of the existing Transformer-based tabular models. The results show strong empirical performance against other state-of-the-art methods, including GDBT models such as CatBoost and XGBoost, on an industry-grade benchmark. The authors offer extensive ablation study to justify their architectural choices and offer insights into the model tuning.\", \"weaknesses\": \"This is purely an experimental paper. As such, a lot of design choices are clearly dictated by performance considerations and are not intuitive or otherwise theoretically justified. 
I left some questions below, addressing which I think could make the paper stronger.\", \"questions\": \"1. Is my understanding correct that all of the missing / simulated missing features share the same (learnable) embedding? Are you using any other information about the missing features such as their column names? From my read of the paper, if I have missing features, $\\\\mathbf{x}_{m}$, and masked features, $\\\\mathbf{x}_s$, such that $\\\\[\\\\mathbf{x}_m,\\\\mathbf{x}_s\\\\]\\\\in\\\\mathbb{R}^n$, then your method would embed them as $\\\\[\\\\mathbf{x}_m,\\\\mathbf{x}_s\\\\] \\\\to \\\\[\\\\mathbf{e}^T, \\\\mathbf{e}^T, ..., \\\\mathbf{e}^{T}\\\\]\\\\in\\\\mathbb{R}^{d\\\\times n}$, where $\\\\mathbf{e}$ is your learnable embedding. It would help a lot if more details regarding this were included in Section 3.2 and 3.3.\\n2. I do not understand how your `num_mask_head` and `cat_mask_head` work from reading Section 3.5 and Figure 1. Since the simulated dropped features could be different between rows within a single batch, how do these heads predict the missing values? I am supposing these heads are reconstructing the whole row, but the loss is only computed for the masked missing values. It would be nice if further clarification is provided in Section 3.5 regarding this. \\n3. You introduce the missing-related attention mechanism, but empirically it only helps when added in the last layer. Is there any intuition here? Supposedly, if no masking has been used in the previous layers, the information should have been already shared between all the features. So, in a sense, it should not matter if you mask some other positions or mask more tokens. \\n4. In Section 4.1, how are missing values imputed for the other methods? Some of the models can handle missingness out-of-the-box; however, I suppose for all deep learning methods, some pre-processing had to be applied. \\n5. 
In Section 4.3 \\\"Performance under Different Missing Rates\\\", what do you mean by MaskTab without enhancements. If you don't use the missing feature embeddings, how are those missing features handled then? \\n6. The related works could use some improvement. For example, the recently released, LAMDA-TALENT [1] benchmark includes many more methods. In particular, related to your method, unsupervised masking via contrastive learning for robustifying DNN has been previously explored in SwitchTab [2]. \\n\\n[1] https://github.com/qile2000/LAMDA-TALENT\\n\\n[2] Wu, Jing, et al. \\\"Switchtab: Switched autoencoders are effective tabular learners.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 14. 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents **MaskTab**, a framework for handling missing features in tabular data by incorporating simulated missing data directly into downstream tasks. The authors claim MaskTab outperforms other DNN-based methods on the TabReD benchmark by employing masked attention to prioritise trustworthy features.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Tackles the prevalent problem of missing features in tabular data.\", \"Shows slight improvement on TabReD benchmark datasets, though the results are somewhat questionable.\"], \"weaknesses\": \"I find the novelty of the proposed method questionable. Masking features during both pre-training and training for downstream tasks is already common practice, often implemented by introducing various forms of noise (e.g., SubTab [1]) or explicitly masking to improve representation learning (e.g., VIME [2]). Additional concerns include:\\n\\n- **Lack of Confidence Intervals**: Confidence intervals (e.g., error bars) are not reported, which exaggerates the model's performance claims. 
For instance, in Table-2, MLP and MaskTab appear to perform similarly for the Ecom Offers dataset, and TabNet\\u2019s performance is comparable to MaskTab for Cooking Time dataset when considering a 95% confidence interval (though this isn't shown).\\n- **Incomplete Literature Review**: The paper provides a limited review of tabular data methods and should reference studies that mask or corrupt input features, such as SubTab [1] and VIME [2].\\n- **Limited Value of \\\"Missing-Related Masked Attention\\\"**: The ablation study shows minimal benefit from the proposed masked attention, likely because attention models inherently prioritize informative features. Explicitly adjusting attention weights based on missing features may not be necessary.\\n- **Baseline Comparison**: A baseline using existing models with imputed data should be included to clarify the performance contrast and demonstrate any true advantage of the proposed method.\\n\\n**References:**\\n\\n[1] SubTab: Subsetting Features of Tabular Data for Self-Supervised Representation Learning \\\\\\n[2] Vime: Extending the success of self-and semi-supervised learning to tabular domain.\", \"questions\": \"No questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose using a new trend of masked modeling in a tabular setting and compare the results with several simple baselines, such as XGBoost, etc. However, it is not entirely clear what the main challenges of tabular data for such training are that the authors are trying to address. 
I expected to see some sort of novelty, at least in terms of the masking strategy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The foundation model for tabular data is important.\", \"weaknesses\": [\"The novelty is somewhat limited, as there are not many specific improvements compared to existing methods in the new tabular setting.\", \"The experiments are also somewhat limited. For example, baselines such as SSL methods for tabular data, including SubTab, VIME, SCARF, PTaRL, etc are not included.\", \"The experiments lack standard deviation for the reported values.\", \"How does the performance compare to foundation models or LLMs?\", \"Overall, I believe the paper's aims, claims, and experiments do not seem competitive with the current ICLR standards.\"], \"questions\": [\"Please consider adding more baselines, especially those that can handle missing data, including SubTab, etc.\", \"Clearly outline the main challenges that make this setting different from other data modalities and how you addressed those challenges. Missing data also exists in language and biological data.\", \"The implementation details are not clear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new approach to deal with missing values when the final goal is creating a supervised model. Their approach is based on masking inputs in the training set when training their classifiers.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1) Masking is an interesting approach to modeling missing values;\\n2) The paper is well written.\", \"weaknesses\": \"Even though the authors approach this problem from an interesting angle (i.e., masking), it suffers from major weaknesses:\\n1) Their method lacks statistical intuition and/or theory. 
For example, it is not clear when/why their approach should work. \\n2) The authors make no distinction on the types of missing patterns they are tackling: is it MAR, MCAR, etc?\\n3) The empirical results do not seem promising when compared to baselines. \\n\\nIn summary, I do not see an expressive contribution either in terms of theory or empirical results.\", \"questions\": \"Assuming training and testing have the same missing patterns, why is masking a sensible approach? In my view, masking can be only useful in cases where there is a distribution shift when you are trying to emulate test missing patterns. The authors comment on distribution shifts but I do not think this is explored in depth as it should be (perhaps using a more formal statistical framework).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
ExUC9dQJhQ
Certified Robustness to Data Poisoning in Gradient-Based Training
[ "Philip Sosnin", "Mark Niklas Mueller", "Maximilian Baader", "Calvin Tsay", "Matthew Robert Wicker" ]
Modern machine learning pipelines leverage large amounts of public data, making it infeasible to guarantee data quality and leaving models open to poisoning and backdoor attacks. Provably bounding model behavior under such attacks remains an open problem. In this work, we address this challenge by developing the first framework providing provable guarantees on the behavior of models trained with potentially manipulated data without modifying the model or learning algorithm. In particular, our framework certifies robustness against untargeted and targeted poisoning, as well as backdoor attacks, for bounded and unbounded manipulations of the training inputs and labels. Our method leverages convex relaxations to over-approximate the set of all possible parameter updates for a given poisoning threat model, allowing us to bound the set of all reachable parameters for any gradient-based learning algorithm. Given this set of parameters, we provide bounds on worst-case behavior, including model performance and backdoor success rate. We demonstrate our approach on multiple real-world datasets from applications including energy consumption, medical imaging, and autonomous driving.
[ "Data Poisoning", "Certified Robustness", "Neural Networks" ]
Reject
https://openreview.net/pdf?id=ExUC9dQJhQ
https://openreview.net/forum?id=ExUC9dQJhQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tvQLQsJpIm", "mCdp53bcuO", "lAz2Q9kb7P", "fmqKAXfHmC", "fOhHQFEk5L", "dltQaDBPAL", "WR1rHkG2eD", "WMldtFjMvX", "UHC23vV8Qm", "SDW1o6t54U", "H1YObF7eW3", "CItix2R4zj", "3LC4GCETRb", "0m0T0wJ4q1" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1732716066871, 1732529476396, 1734031115058, 1731959928484, 1731958571411, 1732717798891, 1730714920324, 1731959516906, 1730390372914, 1732225577083, 1731958516195, 1737524115305, 1732020523303, 1730080006230 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_odiZ" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Submission11274/Area_Chair_CCRe" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_odiZ" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_vG7Q" ], [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_yEud" ], [ "ICLR.cc/2025/Conference/Submission11274/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_vG7Q" ], [ "ICLR.cc/2025/Conference/Submission11274/Reviewer_yEud" ] ], "structured_content_str": [ "{\"comment\": \"Thanks to the author for their answer. I keep my concern about this approach, which is similar to the concern on tightness from reviewer yEud. 
Specifically, the tightness of practical evaluation of such bound remains concerning (in fact impractical) as the reachable parameter interval will explode.\\n\\nFor the fundamental comparison between a formal (non-probabilistic) approach as in this paper, and the statistical approach based on approximations as in influence function based methods, it might be needed to remind the authors that approximations can sometimes be more precise than exact (or formal) computations ! and this case is a perfect illustration for this, since the reachable set can be very broad and practically useless as an information, when trying to narrow down the poisoning capabilities of the attacker. I will unfortunately keep my score and recommend that the authors thoroughly compare their approach to the one with influence functions.\"}", "{\"comment\": \"We greatly appreciate the reviewer's time and continued consideration of our rebuttal. We hope the below clears up their concerns and are happy to provide any further clarification if needed.\\n\\nThe reviewer is correct in their assessment that under standard training circumstances the ground-truth reachable parameter set (and therefore our reachable parameter interval) will become vacuous as the number of training iterations grows. Yet, in cases where only fine-tuning is needed or where only a small proportion of the training data is untrusted, our bounds ought to remain tight and practically useful. We hope that future works will improve the approximations we make and/or will modify the training procedure/model itself to make it less susceptible to poisoning adversaries.\", \"on_the_question_of_application_to_other_architectures_and_models\": \"Prior and concurrent works on bound propagation for neural networks have demonstrated the ability to bound robustness properties in complex architectures including convolutional neural networks [1] and transformer architectures [2]. 
Outside of neural networks, similar bounds to ours have been investigated for gradient-boosted decision trees [3]. Application and analysis using our framework can be carried out in this setting, though one must carefully employ our Definition 1 and Definition 2 in order to compute bounds on gradients.\\n\\n\\n\\n[1] - https://arxiv.org/abs/1811.12395\\n[2] - https://dl.acm.org/doi/10.1145/3453483.3454056\\n[3] - https://arxiv.org/abs/1906.03526\"}", "{\"metareview\": \"This paper explores the problem of providing provable robustness guarantees to training-time adversarial attacks. The proposed framework certifies robustness against poisoning and backdoor attacks by bounding the set of all reachable parameters, with worst-case guarantees on model performance and attack success rate.\\n\\nReviewers were interested in the approach and found the ideas fairly novel. However, there were major concerns that the bounds given by the proposed method were extremely loose and hence may not be meaningful in practice. This point was not resolved through the rebuttals. Also, the experiments were not well grounded in prior research since the authors did not use baseline methods from the literature. While this was partially added during rebuttals, the experiments have yet to be presented with sufficient rigor.\\n\\nThe work is promising, and the authors have worked to improve it during this review process, but the changes required are not quite complete, and are too major to be accepted as is. I am recommending rejection, but want to encourage the authors to resubmit to a comparable conference after taking the reviewer\\u2019s feedback into consideration.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers asked about the practical scalability of the approach, and during the rebuttal the authors both shared code, and discussed a few small scale experiments on computational cost. 
These are both beneficial, but the experiments as presented are informal and should be fleshed out in a future submission. This type of evidence would be more convincing if run on larger scale datasets.\\n\\nReviewers were concerned that the bounds given by the proposed method seem to be extremely loose and hence may not be meaningful in practice. This was not satisfactorily solved by the authors during the discussions.\\n\\nReviewers found that the experiments and discussions in the paper were not well grounded in prior research. For example there was a lack of comparable baselines used for comparisons. The proposed method takes an orthogonal approach to robustness, and it was discussed that no existing methods are directly comparable. However, this is not a good excuse to leave out all baselines. The authors should instead strive to discuss what the alternative methods are, and be clear about the differences with their approach, while at the same time providing empirical comparisons in the most fair way possible. This would enable the reader to make informed judgements on the relative value and quality of each type of robustness guarantee.\\n\\nI note that the community often criticizes new approaches for not performing on par with incumbents, while ignoring that incumbents have had years of tuning and engineering to optimize these aspects. The actual performance of the proposed method compared to incumbents like randomized smoothing is not a major factor to me in my final decision. However, the lack of any comparisons is a major concern.\"}", "{\"comment\": \"We thank the reviewer for their consideration of our submission and their useful feedback. Below we address the weaknesses that they offer.\\n\\n## Proofs\\n\\nWe thank the reviewer for pointing out the missing proofs and mislabelled proposition. We have changed the labels on Propositions 1 and 2 to be the same, since they refer to the same result. 
Theorems 3.3 and B.1 present descent direction bounds for the bounded and unbounded adversaries, respectively. We have included proofs to all theorems and propositions in the updated manuscript.\\n\\n## Lack of scalability\\n\\nWhile the reviewer is correct in their assessment that our proposed methodology over-approximates the worst-case parameter interval, it is not the case that the corresponding guarantees are vacuous. In particular, we point the reviewer to our later experimental results, such as UCI regression, OCT-MNIST, and PilotNet, where non-vacuous guarantees are provided for poisoning attacks of up to m=10000, m=600, and m=600, respectively.\\n\\nAs acknowledged extensively in the paper, our proposed algorithm has particular problem classes for which the bounds are relatively weaker, such as certain classification and training settings. We agree that in these settings our guarantees may not be practically useful, for example the weak guarantees for label-flipping in the halfmoons or MNIST datasets. However, these results are included both to highlight the particular advantages and limitations of our method, and as a starting point for future work. Additionally, the weakness of our method on a particular setting does not preclude the strong guarantees that we observe for other attack settings.\\n\\nOutside the practical application of our robustness certificates, our approach offers a novel theoretical testing paradigm to understand the sensitivities of standard learning algorithms and data pipelines to poisoning attacks. As an example, though there exists a practical gap between our certificates and real poisoning attacks (Appendix H), models with weaker guarantees appear to be more susceptible to attack when compared with models with stronger guarantees (Figures 7 and 8). 
Therefore, even weak bounds can provide comparative insights into the robustness of training pipelines.\\n\\n## Practicality\\n\\nThe reviewer is correct that obtaining an exact parameter interval is not tractable. However, Algorithm 1 provides a means to compute an over-approximate parameter interval. This is looser than the true parameter interval but may still be useful in many settings, as addressed above. The over-approximate reachable parameter set is indeed a valid bound against all potential attacks (see Theorem 3.2). \\n\\nRegarding model architecture, while in this paper we focus on feedforward networks, our algorithm can be extended to general architectures in the future.\\n\\n## Attack goals\\n\\nWe thank the reviewer for pointing out a lack of clarity in Section 2.2. We note that \\u201cuntargeted\\u201d poisoning attacks specifically target the training loss with the goal of denial of service / preventing model convergence. This is following the taxonomy of poisoning attacks in [1]. Other poisoning attack goals, namely targeted poisoning and backdoor attacks, are evaluated over a separate test dataset. All results in the experimental sections are reported with respect to test datasets.\\n\\nRegarding backdoor attacks, the loss as formulated represents the adversary's goal assuming any of the test data may be backdoored, and does not specify the clean data. As the \\\"defenders\\\" against the attack, we do not know a priori which of the test data is targeted by a backdoor attack. Thus, the formulated loss represents the performance on the test data assuming each point may be targeted. In practice, an attacker may only target a certain subset of the test data, or have more complicated attack goals. If this subset of data is known, it is trivial to use our parameter-space bounds to compute certificates on the clean and backdoored losses separately.\\n\\n[1] Tian et al. 2022. 
\\u201cA Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning\\u201d\\n\\n## Closed-form gradient bounds\\n\\nSolving the optimization problem in Eq. (10) is in general not tractable for neural network models. Instead, we provide an approach for soundly bounding the optimal value of this optimization problem via bounds-propagation. Bounding Eq. (10) requires the following steps:\\n\\n1. Compute bounds on the output of the neural network using our CROWN-based linear bound propagation algorithm.\\n2. Given the bounds on the output of the neural network, compute bounds on the gradients of the network using interval bound propagation.\\n\\nIn principle, step 1 proceeds as in [2], but with our modified expressions for the equivalent weights and biases. Step 2 follows the interval bound propagation procedure as outlined in Appendix F.\\n\\n[2] Zhang et al. 2018. \\\"Efficient neural network robustness certification with general activation functions.\\\"\"}", "{\"comment\": [\"We thank the reviewer for sharing their detailed feedback on our submission, which has helped us clarify and strengthen several aspects of the submission. Below, we address their comments in detail.\", \"## Weaknesses\", \"We agree with the reviewer that a direct comparison with existing certified training algorithms would help demonstrate both the advantages and limitations of Abstract Gradient Training. Therefore, we have added Appendix J.1, which provides a direct comparison between AGT and a DPA, a popular aggregation-based certified training algorithm. We temporarily include the figure from (Levine & Feizi, 2020), and are working on reproducing the results of DPA ourselves for the final version of the paper. 
We note that we have not provided a comparison to randomised smoothing approaches, as they provide only probabilistic guarantees of robustness.\", \"Regarding reproducibility, we provide the full source code for our experimental results at the following anonymized link https://anonymous.4open.science/r/agt-DAAB. The source code will additionally be published with the final version of this paper.\", \"Our certification framework is in fact applicable to any first-order optimization algorithm, such as SGD variants (e.g. momentum-based methods) and Adam. The reviewer is correct that Algorithm 1 only applies to vanilla SGD, but it can be extended to other optimization algorithms. This does indeed require reachability analysis of the optimizer's internal state (e.g. storing valid bounds on the moments at each iteration), but this fits within our framework, requiring small changes to Algorithm 1. We have provided an example of bounding SGD with momentum in the linked source code (under abstract_gradient_training/optimizers.py).\", \"## Minor Weaknesses\", \"We have updated Appendix J with a comparison of the run-time of AGT vs standard training using Pytorch.optim. We highlight the following results here: For the UCI regression dataset (Figure 2), each certified training run takes approximately 50 seconds using our implementation of Abstract Gradient Training. For comparison, training the same model in vanilla pytorch takes approximately 20 seconds. On the other hand, fine-tuning PilotNet (Figure 5) in AGT takes ~110 seconds, compared to 80 seconds in Pytorch. Thus in practice AGT comes with only a modest computational penalty.\", \"As written, the definition of safe output S in Eq. (4) does not take into account data-dependent safety. As noted by the reviewer, this is not a fully general setting and the safe set for a particular prediction may depend on the ground truth label, for example. 
It is simple to generalise our certificates to this setting, and notationally we will adapt our formulation by adding S(x, y) to the definitions in Section 2.2.\", \"The definition of backdoor attack goal does indeed not take into account the performance on unperturbed samples. However, as the \\\"defenders\\\" against the poisoning attack, we do not know a priori which of the test data is targeted by a backdoor attack. Thus, the formulated loss represents the performance on the test data assuming each point may be targeted, which is a worst case assumption which may be loose in practice.\", \"The prior work of Wicker et al. 2020 computes a per-input safe-weight set in order to certify probabilistic safety for BNNs. Despite being different overall problems, the key computational differences between that approach and ours are (1) we must compute bounds on the gradient thus requiring propagation both forwards and backwards through the network architecture (2) Wicker et al. 2020 builds a \\\"safe-weight set\\\" given an input-output specification and trained machine learning model whereas we compute the reachable parameter set of the training algorithm itself. The key similarity is that if one replaces the sampling approach in Wicker et al. 2020 (used to build a safe weight set) with the output of AGT then the approach of Wicker et al. can be used to bound test-time adversaries in our poisoning setting. But this still requires AGT, our primary contribution.\", \"## Questions\", \"1. We thank the reviewer for pointing out this issue. The reviewer is correct that $\\\\max(n, m)$ should only be used in a \\u201cpaired\\u201d poisoning setting, where the same data points must be chosen for feature and label poisoning. In the case of the non-paired setting, $m + n$ should be used in place of $\\\\max(n, m)$ in Theorem 3.3. We have corrected this in the revised copy of the paper, along with re-running the affected plot (Figure 1.d).\", \"2. 
In principle, many neural network verification algorithms can potentially be extended to fit within our framework. Indeed, $\\\\alpha,\\\\beta$-CROWN in particular requires only a small tweak to the CROWN-style bounds we present in our paper. However, extending other verification algorithms for use in Abstract Gradient Training is outside the scope of this work. CROWN was chosen for this work as a trade-off between complexity and tightness.\"]}", "{\"comment\": \"We appreciate the reviewer's time and continued engagement with our paper. We believe the most recent response suggests a misunderstanding of the overall purpose of our contribution, which we should have clarified.\\n\\nOur work intends to develop a formal proof, i.e., a sound certification that an attacker cannot achieve a poisoning goal. For instance, from the second sentence of our abstract: \\u201cProvably bounding model behavior under such attacks remains an open problem.\\u201d\\n\\nWe note that while influence functions represent a valuable form of analysis for poisoning attacks, they cannot be used to certify this robustness and therefore do not constitute an alternative solution to our problem of interest.\\n\\nWe appreciate the reviewer's concern about the tightness of our verification approach; however, as we have discussed with Reviewer yEud, our experimental results on OCTMNIST and PilotNet suggest that though over-approximate (to ensure soundness), our bounds are still both valid and useful.\"}", "{\"summary\": \"This paper attempts to provide provable guarantees on the behaviour of models trained with potentially manipulated data without modifying the model or learning algorithm. 
The method uses convex relaxations to over-approximate the set of all possible parameter updates for a given poisoning attack, and provides bounds on worst-case behaviour.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper proposes a framework, with a (claimed to be novel) bound propagation strategy, for computing bounds on the\\ninfluence of a poisoning adversary on any model trained with gradient-based methods, from the above, a series of proofs are suggested to bound the effect of poisoning attacks, finally, empirical evaluations are proposed to illustrate the approach.\", \"weaknesses\": \"While I might have missed the rationale for computing the set of all reachable trained models given how loose such (worst-case) bounds can be, the approach seems to claim novelty while overlooking important previous contributions, in particular, the use of influence functions in robust statistics, and their revival in modern machine learning.\\n\\nCook, R. D. Detection of influential observation in linear regression. Technometrics, 19:15\\u201318, 1977.\\n\\nCook, R. D. Assessment of local influence. Journal of the Royal Statistical Society. Series B (Methodological), pp. 133\\u2013169, 1986.\\n\\nCook, R. D. and Weisberg, S. Characterizations of an empirical influence function for detecting influential cases in regression. Technometrics, 22:495\\u2013508, 1980.\\n\\nCook, R. D. and Weisberg, S. Residuals and influence in regression. New York: Chapman and Hall, 1982.\\n\\nKoh, P.W. and Liang, P., 2017, July. Understanding black-box predictions via influence functions. In International conference on machine learning (pp. 1885-1894). 
PMLR.\", \"questions\": \"My main question would be on how this approach compares (in terms of computing the exact influence of a poisoned dataset) to the finer analysis in the references provided above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This submission considers the problem of providing provable robustness guarantees to training-time adversarial attacks. Specifically, the considered perturbation model admits changes to a constrained number of $n$ feature vectors and $m$ labels, with optional constraints on their norm. The adversary's objective is to perform a successful targeted or untargeted poisoning attack, or a backdoor attack.\\nUnlike prior work, the authors propose to generalize existing convex relaxation based methods for neural network verification to (1) determine a reachable set of model parameters at training time and (2) at inference time determine a reachable set of model outputs given these parameter sets. 
\\nThe reachable set of model parameters is iteratively expanded with each gradient step.\\nGiven the perturbation model and bounds from the previous iteration, lower and upper bounds for the gradients are determined via convex relaxation while (1) considering worst-case choices of inputs and parameters for $max(m,n)$ inputs and (2) worst-case choices of parameters for the remaining inputs. \\nThe strength of the resultant bounds for different perturbation model parameters is evaluated by training a model from scratch on a simple regression dataset and dimensionality-reduced MNIST, as well as fine-tuning on two other vision datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This submission proposes a novel solution to an extensively studied problem (susceptibility to training-time attack), which has received renewed attention due to the current excitement around language models.\", \"The problem is well-motivated via references to prior work on poisoning attacks.\", \"The method is clearly distinguished from prior aggregation-based approaches.\", \"The idea of applying iterative reachability analysis to SGD is very natural. 
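[Editor's note] The iterative reachability idea summarized above — expanding an interval over the parameters at every gradient step using bounds on the gradient — can be sketched in a few lines. The following is my own toy illustration with made-up data, using plain interval arithmetic on a single scalar parameter; it is not the paper's CROWN-based Abstract Gradient Training algorithm, only a minimal stand-in for the same reachability pattern under bounded feature poisoning:

```python
import numpy as np

def interval_mul(a_lo, a_hi, b_lo, b_hi):
    # Sound interval product: min/max over the four corner products.
    corners = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
    return corners.min(axis=0), corners.max(axis=0)

def sgd_interval_step(w_lo, w_hi, x, y, eps, lr):
    """One SGD step on the loss 0.5*(w*x - y)^2 for a scalar model w,
    propagating an interval over w while every input feature may be
    perturbed by at most eps (a crude stand-in for feature poisoning)."""
    x_lo, x_hi = x - eps, x + eps
    # Residual r = w*x - y as an interval.
    r_lo, r_hi = interval_mul(w_lo, w_hi, x_lo, x_hi)
    r_lo, r_hi = r_lo - y, r_hi - y
    # Gradient g = r*x as an interval, averaged over the batch.
    g_lo, g_hi = interval_mul(r_lo, r_hi, x_lo, x_hi)
    g_lo, g_hi = g_lo.mean(), g_hi.mean()
    # The descent update flips which gradient bound moves which endpoint.
    return w_lo - lr * g_hi, w_hi - lr * g_lo

x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])              # noiseless data from w* = 2
w, (w_lo, w_hi) = 0.0, (0.0, 0.0)
for _ in range(20):
    w = w - 0.1 * np.mean((w * x - y) * x)            # nominal (clean) SGD
    w_lo, w_hi = sgd_interval_step(w_lo, w_hi, x, y, eps=0.1, lr=0.1)
# Any clean trajectory is provably contained in the interval:
assert w_lo <= w <= w_hi
```

With eps=0 the interval collapses exactly onto the nominal trajectory; with eps>0 the interval widens at every step, which mirrors the looseness/explosion concern raised repeatedly in the reviews below.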
It is somewhat surprising that no one has tried this ansatz before, i.e., the work closes a very obvious gap in existing literature.\", \"Rather than proposing a completely new relaxation approach, the work expands the well-established CROWN method (soundness appears to be good).\", \"The method is applicable to a very general perturbation model.\", \"The in-depth discussion of the method and its limitations in Section 3.4 is commendable.\", \"The work opens various obvious avenues for future work that could be of interest for the wider robust ML community (refining single- and multi-neuron bounds, generalizing beyond feed-forward networks, enforcing consistency of perturbations between gradient steps etc.)\", \"The manuscript is well-structured, beginning with a high-level approach (Parameter-space certification) and a specific instantiation of this approach (Abstract gradient training), before discussing the technical details of its individual components.\"], \"weaknesses\": [\"I agree with the authors' argument that the approach taken by this method is orthogonal to prior work. Nevertheless, the experiments would be significantly more informative if one were to use prior work as baselines. Instead of demonstrating \\\"the method for verifying robustness does in fact verify robustness for sufficiently small perturbations\\\", one could demonstrate \\\"the verification method fills a useful niche in the certified-accuracy/runtime space for certain dataset / model sizes\\\".\", \"The authors claim that their procedure was applicable to any first-order optimizer like Adam (see ll.198-199). This claim appears to be incorrect, since it would require a reachability analysis of the optimizer's internal state (moments etc.). Instead, the method appears to be limited to vanilla SGD.\", \"The submission does not include an implementation. 
Since the proposed method is rather involved, and implementing verifiers requires significant care, this significantly hinders reproducibility. I would strongly encourage the authors to provide code.\", \"Even if one were to try and reimplement the method, the description of hyperparameters and other aspects of the experimental setup is hardly sufficient to reproduce any of the results (see Appendix J). I would strongly encourage the authors to upload experiment configurations with their code.\", \"The discussion of related work is mostly relegated to Appendix A. This is an integral part of the paper and should (maybe with slightly more concise writing) be moved to the main text.\", \"**Minor weaknesses**:\", \"While the authors provide asymptotic bounds on runtime complexity, they do not specify the wall-clock time needed for verifying the different models in the experiment section. This would make it easier to gauge the practical feasibility of the proposed method.\", \"Based on the asymptotic runtime complexity, the use of small-scale models, and fine-tuning instead of training from scratch, the method appears to be hardly scalable to real-world settings -- except for some special cases. I do, however, not consider this a major weakness, as this work represents a first step in a novel research direction.\", \"The definition of targeted attacks (Eq. 4) assumes the existence of a single, data-independent set of safe outputs $S$. However, what constitutes a safe output may often be data-dependent (e.g., whether steering left/right is safe depends on whether the ground truth direction is left/right in the PilotNet dataset from Fig. 5). It should be easy to generalize the proposed method to this use case, since one just has to evaluate the model output bounds differently.\", \"The definition of backdoor attacks (Eq. 5) does not enforce that the model should have high accuracy on unperturbed samples. 
As such, the guarantees are likely to be very loose upper bounds on unnoticeable backdoor attacks. It would probably be better to solve a constrained optimization problem over the set of reachable model parameters.\", \"Prior work on parameter-level verification for Bayesian NNs is mentioned, but it is not explained why these methods are not applicable to the considered problem. For instance, Wicker et al. (2020) [1] appear to determine a \\\"safe\\\" set of weights such that only some desired set of outputs is reachable, which is essentially inverse to Theorem 3.1.\", \"---\", \"[1] Wicker et al. Probabilistic Safety for Bayesian Neural Networks. UAI 2020.\", \"---\", \"Overall, I believe that this submission is a meaningful, well-motivated contribution to the field of robust machine learning. It proposes a very natural approach to robustness certification under training-time attacks that closes an obvious gap in existing literature.\", \"The submission is primarily held back by poor reproducibility and somewhat uninformative experiments, which do not provide any insights about when parameter-space certification is preferable to existing aggregation-based approaches.\", \"I thus recommend borderline/weak acceptance. Should the authors address either of these weaknesses (e.g. 
by sharing a link to anonymized code or comparing to some aggregation / randomized smoothing method on MNIST), I will further increase my score (even if the experimental results are negative).\", \"**EDIT**: Following the rebuttal, which addresses both of my primary concerns, **I have decided to increase my score to 8 (accept)**.\"], \"questions\": [\"Shouldn't we use $\\\\mathrm{SEMin}_{m+n}$ in Theorem 3.3, since the $m$ feature vectors and $n$ labels that are perturbed can belong to disjoint sets of samples (see ll.117-118)?\", \"Aside from increased computational cost, are there any reasons why you decided to extend CROWN instead of more recent methods like ($\\\\alpha,\\\\beta$)-CROWN, MN-BAB etc.?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response.\\n\\nI agree with the point that this paper provides a tool to understand the sensitivities of standard learning algorithms and insights into their robustness. The experimental results on OCTMNIST and PilotNet also demonstrate that the certification is valid. \\n\\nNevertheless, the tightness of practical evaluation of such bound remains concerning. It looks like the reachable parameter interval will explode after a few iterations as it grows larger and larger, and it is surprising for me to see that the results on PilotNet are non-vacuous. Also, is the calculation of $\\\\Delta\\\\Theta$ in Algorithm 1 (line 6), or solving Eq. (10), realistic for models other than feed-forward neural networks? This paper's solution to this optimization step seems based on Prop 1, which is specific to NNs. \\n\\nOverall, I increase my score to 5. I am willing to further increase the score if the authors can address the remaining concerns.\"}", "{\"comment\": \"We thank the reviewer for bringing to our attention the previous literature regarding influence functions. 
Indeed influence functions are relevant and have been previously applied to the problem of training-set adversarial attacks. We will update our related works section to take these into consideration.\\n\\nWe now turn to pointing out the differences between their approach and our approach. In particular, we note that influence functions provide only an approximate measure of the influence of a training point, whereas our method aims to provide certificates (i.e. formal, non-probabilistic, guarantees) of the maximum influence a training point can have. Paraphrasing [1], influence functions give an efficient approximation to the change in the loss given a small perturbation to a training point. [1] further performs a series of approximations to compute this sensitivity for neural network models. Towards data-poisoning, they then use their approximate influence function to compute training-set attacks by perturbing existing training data (i.e. taking the role of the adversary).\\n\\nIn contrast to this, the reachable parameter set computed by our algorithm takes a distinctly different approach. Firstly, we compute a sound reachable set, that is for any training data within the allowable perturbation, the parameters of the model are provably guaranteed to lie within the reachable set. Additionally, our approach leverages convex-relaxation and bound propagation, while influence functions (in the context of [1]) rely on quadratic approximations to the empirical loss function. Finally, our approach aims to provide certified guarantees on trained models, taking the role of the model owner / \\\"defender\\\".\\n\\nWe hope that this addresses the reviewer's questions and concerns and are happy to answer any further questions or discuss further concerns the reviewer might have.\\n\\n[1] Koh, P.W. and Liang, P., 2017, July. Understanding black-box predictions via influence functions. In International conference on machine learning (pp. 1885-1894). 
PMLR.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you. This addresses both of my primary concerns (reproducibility and comparison to aggregation-based methods w.r.t. runtime and certified accuracy).\\nI have increased my score accordingly.\"}", "{\"summary\": \"This paper proposes to certify the robustness of a model trained on possibly poisoned dataset through the robustness of its parameters. Specifically, the model performance is bounded by the worst-case performance over all possible parameters that can be trained over all possible poisoned datasets under a capability constraint on the attacker. The authors therefore propose to certify a parameter-space bounds, and provide a practical algorithm when the model is a fully-connected neural network. The authors demonstrate their certified accuracy through the proposed method on various datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized and easy to follow.\\n2. This paper provides an alternative perspective to certify the model performance through the robustness of model parameters.\\n3. The topic of AI security and robustness regarding malicious model attacks attracts a signficant amount of attention in recent years.\", \"weaknesses\": \"**Major Concerns:**\\n\\n1. While it is intuitive that the variation of model parameters can serve as a tool to evaluate its robustness, the guarantee it could provide is extremely loose, at least as presented in this work. In particular, the authors consider the worst-case parameter interval against all possible poisoning dataset as in Eq. (6), and relax it further to Eqs. (8) and (9). It is questionable how useful such a valid but loose bound will be. This concern is to some extent confirmed by the experimental results. 
Examples include (i) in Fig 1, either a small perturbation level or a small number of poisoning data points is considered to avoid meaningless results. In Fig 1(c), flipping only 4 labels causes an almost void certified bound. (ii) in Appendix H, the authors acknowledge that their bound is extremely loose compared to real-world attacks and corresponding model performance. \n\n2. The practical evaluation of Eq. (6). As acknowledged by the authors in Section 3, obtaining a parameter interval by Algorithm 1 is intractable in general. The authors provide an approach to solve Eq. (10) for fully-connected neural networks only. It seems too ambitious to derive the reachable parameter interval against all potential attacks. Even if we can somehow approximate it, as mentioned in the first point, this interval may turn out to be vacuous. \n\n3. Attack goals formulated in Section 2.2. While the authors claim the goal of the attacker is to maximize the training loss such as Eqs. (3), (4) and (5), it makes more sense to consider controlling the generalization error or the test loss, because people are generally more interested in the test performance. To this end, the experiments should also report the certified and true performance on test datasets. Moreover, Eq. (5) is more similar to adversarial attacks instead of backdoor poisoning. A backdoor poisoning attack has dual goals of preserving the performance of clean data and ensuring the accuracy of the backdoored data in terms of the target label.\n\n**Minor Concerns:**\n1. It is unclear to me what is the closed-form solution to Eq. (10) for neural networks in terms of $\\\\epsilon, \\\\nu, n, m$. An algorithm explaining these details will help readers better understand it.\n\n2. 
Some results are neither proved nor properly referenced, such as Theorem 3.3 and B.1, and Proposition 1 and 2.\", \"questions\": \"Please see the Weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
ExHUtB2vnz
INFER: A Neural-symbolic Model For Extrapolation Reasoning on Temporal Knowledge Graph
[ "Ningyuan Li", "Haihong E", "Tianyu Yao", "Tianyi Hu", "Yuhan Li", "Haoran Luo", "Meina Song", "Yifan Zhu" ]
Temporal Knowledge Graphs (TKGs) serve as an efficacious way to store dynamic facts in the real world. Extrapolation reasoning on TKGs, which aims at predicting possible future events, has attracted consistent research interest. Recently, some rule-based methods have been proposed, which are considered more interpretable than embedding-based methods. Existing rule-based methods apply rules through path matching or subgraph extraction, which falls short in inference ability and suffers from missing facts in TKGs. Besides, during the rule application period, these methods consider the standing of facts as a binary 0 or 1 problem and ignore the validity as well as the frequency of historical facts under temporal settings. In this paper, by designing a novel paradigm for rule application, we propose INFER, a neural-symbolic model for TKG extrapolation. With the introduction of a Temporal Validity Function, INFER is the first to consider the frequency and validity of historical facts, extending the truth value of facts into continuous real numbers to better adapt to temporal settings. INFER builds Temporal Weight Matrices with a pre-trained static KG embedding model to enhance its inference ability. Moreover, to facilitate potential integration with existing embedding-based methods, INFER adopts a rule projection module which enables it to apply rules through matrix operations on GPU. This feature also improves the efficiency of rule application. Experimental results show that INFER achieves state-of-the-art performance on various TKG datasets and significantly outperforms existing rule-based models on our modified, sparser TKG datasets, which demonstrates the superiority of our model in inference ability.
[ "Knowledge Graph", "Temporal Knowledge Graph", "Temporal Rules", "Temporal Validity" ]
Accept (Poster)
https://openreview.net/pdf?id=ExHUtB2vnz
https://openreview.net/forum?id=ExHUtB2vnz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qyN9sIWND6", "n5yg7c6wdf", "mERDCGopcn", "k3WuCerHsM", "XI0hRPSFTA", "VbvSX3TCpX", "V6bibdrd9r", "V6McX9badY", "TkKYmDTcar", "QPRwqILPAJ", "OxqrikMvIC", "OsOtEQQaiy", "OBCMt6Dgyq", "Io8RxkYN2n", "D74IxPNGIS", "B08ROKLMpg", "AvBNGk54O3", "5jaXQvsboh", "3xGNYBZmhQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1735028523802, 1732243017015, 1732778548374, 1732634205042, 1730585802817, 1731770021401, 1731772446895, 1731776226560, 1731770503228, 1732414308267, 1731770067365, 1732180542183, 1737523739189, 1731773987476, 1730377868859, 1730471032670, 1732414738552, 1730688517103, 1731772516921 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6021/Area_Chair_GXN5" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_agps" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_qjZA" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_cyuQ" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_EcyD" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_qjZA" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_EcyD" ], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ], [ "ICLR.cc/2025/Conference/Submission6021/Reviewer_agps" 
], [ "ICLR.cc/2025/Conference/Submission6021/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The proposed method INFER offers a promising approach to temporal knowledge graph reasoning. The majority of the reviewers\\u2019 concerns have been addressed, and the authors have provided sufficient clarifications, additional experiments, and analysis to strengthen the manuscript. Given the promising results, the addressed concerns, and the contributions made by the authors, I recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors have made significant revisions to their manuscript based on the feedback provided by the reviewers. The following changes were made in response to the reviewers' concerns:\", \"clarification_on_interpolation_and_extrapolation_reasoning\": \"The authors provided additional explanations on the Interpolation and Extrapolation Reasoning concepts, as suggested by Reviewer agps.\", \"clarification_on_variable_constraints_and_efficiency\": \"The authors have added examples and illustrations for Variable Constraints, as well as expanded their explanation of the efficiency analysis, addressing Reviewer cyuQ's concerns.\", \"experiments_on_rule_lengths\": \"The authors included experiments evaluating the performance of INFER using rules with different lengths, as suggested by Reviewer EcyD.\", \"new_datasets\": \"The authors extended their experiments to include additional TKG datasets, such as WIKI and YAGO, as recommended by Reviewer qjZA.\\n\\nAfter the rebuttal, two reviewers raised their scores. Although the reviewers did not reach a consensus on accepting this paper, all agreed that the novelty of the approach is solid. This is why I recommend accepting the paper.\"}", "{\"comment\": \"I thank the authors for addressing my questions. 
I will raise my score.\"}", "{\"title\": \"General Response\", \"comment\": [\"We would like to thank the reviewers for their constructive feedbacks and we upload our revised PDF based on the discussions we have with all reviewers. Main changes we made in the revised version are (changes are highlighted with brown font):\", \"Add clarification about Interpolation Reasoning and Extrapolation Reasoning mentioned by Reviewer agps.\", \"Add examples and illustration for Variable Constraints and clarify the efficiency analysis part suggested by Reviewer cyuQ.\", \"Add experiments evaluating the performance of INFER using rules with different lengths suggested by Reviewer EcyD.\", \"Add experiments on various TKG datasets: WIKI and YAGO suggested by Reviewer qjZA.\"]}", "{\"comment\": \"Thank you for your response and clarification. I will maintain my score.\"}", "{\"summary\": \"The article presents a detailed neuro-symbolic model addressing the task of Temporal Knowledge Graph Completion, particularly focusing on the challenge of extrapolation. This task aims to infer knowledge for a given Knowledge Graph at time T using only past data. The authors propose a well-structured solution, providing an extensive evaluation that compares their model to current state-of-the-art symbolic and neural methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Comprehensive Evaluation: The paper includes a thorough evaluation of the proposed model, supplemented by an ablation study and an efficiency analysis, making it easy to understand the model's effectiveness.\\n2. Efficient Model Design: The model circumvents the common complexity issue of creating separate matrices per timestamp, resulting in a compact, efficient design.\\n3. Clear and Reproducible Description: The methodology is well-articulated, allowing for straightforward reproduction of the results.\", \"weaknesses\": \"1. 
Minor Typos and Inconsistencies:\n - Line 451/452: The term \\\"INFER(Temp)\\\" is used instead of \\\"INFER(Temp Val).\\\"\n - Line 515: The rule quantity is stated as 40, while Table 1 lists it as 60, which could lead to some confusion.\n2. Efficiency Study Observations: The conclusions drawn in the efficiency study are somewhat unclear. Specifically, a competing model achieving a similar score while exploring fewer candidates might suggest that the alternative approach is better optimized in its candidate selection process. Clarifying these efficiency aspects would strengthen the study.\n3. Completeness of Graph Argument: The argument concerning graph completeness and the slope behavior appears overstated, as the trend lines for the three methods are quite similar. Revising this interpretation could enhance clarity.\n4. Ambiguity in Section 4.3.3: The fourth paragraph lacks clarity regarding \\\"variable constraints in rules,\\\" which limits the reader's understanding of the approach. A clearer explanation or example would be beneficial here.\", \"questions\": \"1. In Section 4.3.2, the function $V(s, r, o, t_c)$ is introduced. However, it is unclear how the model handles $t_{last}$ if the fact $<s, r, o>$ was never previously observed. Is the value set to 0, potentially conflicting with timestamp 0, or is another value assigned?\n2. Could the authors elaborate on the benefits of examining a significantly higher number of candidates (100x), especially given the notable increase in runtime (an additional 500 seconds)? Understanding the trade-offs would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer agps\", \"comment\": \"Thanks for your thorough feedback, which helps us polish this paper. 
Here's our clarification and replies to questions in weaknesses:\n\nReply to W#1 and Q#1:\n\nWe have two main motivations (the improvement of rule application efficiency is **not** one of them; it\u2019s more of an advantage of our paradigm than the starting point of the design of INFER):\n\nFirstly, traditional rule-based models, when applying rules through path matching, do not consider the validity of historical facts and treat recent facts and ancient facts equally (lines 82-84 in our paper). Secondly, traditional rule-based models are greatly affected by missing information, leading to inference bottlenecks. To address the first issue, we propose a temporal validity function that considers the time span and frequency of historical facts, extending the truth value of facts from binary to continuous and modeling fact validity. The main experiment and ablation experiment verify this motivation. To address the second issue, we introduce a static KG embedding model, which captures structural information through pre-training. We then propose the entire INFER paradigm, which combines embedding information with improved rule application by constructing and updating the Temporal Weight Matrix and proposing the Projection operator. The resulting neural-symbolic method gains the ability to reason about missing information and its performance is less affected under incomplete settings compared to traditional rule-based models (Results in Table 2). Meanwhile, the efficiency of applying rules is greatly enhanced through performing matrix operations instead of path matching. 
More importantly, our model provides a new paradigm to combine embedding-based methods and symbolic-based methods, paving the way for future research.\n\nBack to your question, after reading these two papers, we believe that the most fundamental difference between our model and Neural-LP lies in the fact that our proposed method is not learnable and does not require a data-driven training process. Undeniably, our model and TensorLog are indeed similar in essence to a certain extent, and we apologize for neglecting this paper during our research. However, there are still some differences between them. The most significant one is that the relation matrix constructed by TensorLog is **binary**, with values stored as either 0 or 1. Therefore, TensorLog's computational method involves direct matrix multiplication, whose result denotes entities reachable through rules, and is the \u201cOR\u201d of each path. In contrast, our model extends the truth values of events from 0 and 1 to **continuous numerical values**. It is unreasonable to directly compute values through matrix multiplication. Hence, we introduce the Max operation in our computational method.\n\nReply to W#2: \n\nThank you for mentioning those two papers. After reading them, we found that the two models you mentioned are targeted at the task of **Interpolation Reasoning** on Temporal Knowledge Graphs (TKGs), while the INFER model we proposed is targeted at the task of **Extrapolation Reasoning** on TKGs. The difference between the two lies in the following:\n\nGiven a temporal knowledge graph with timestamps varying from $t_0$ to $t_T$, **extrapolation reasoning** is to predict new links for future timestamps. Given a query with a previously unseen timestamp $(s, r, ?, t) (t>t_T)$, the model needs to give a list of object candidates that are most likely to answer the query. 
\uff08We give the formal task definition at line 168-170 in our paper\uff09\n\nHowever, under **interpolation settings**, there is no future-domain restriction, and queries are predicted for time t such that $t_0 \u2264 t \u2264 t_T$.\n\nFurthermore, as we mentioned in lines 364-366 of our paper, to satisfy the requirements of Extrapolation Reasoning, the data in the dataset is sorted by timestamps and then partitioned accordingly.\n\nTherefore, we did not compare INFER with the two models you mentioned, as they are designed for different tasks. I believe this explanation can also answer your questions Q#3 and Q#4. More details can be found in some early **Extrapolation Reasoning** papers: [1], [2].\n\n[1] Jin W, et al. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs[J]. arXiv preprint arXiv:1904.05530, 2019.\n\n[2] Li Z, Jin X, Li W, et al. Temporal knowledge graph reasoning based on evolutional representation learning[C]//Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 2021: 408-417.\"}", "{\"title\": \"Reply to Reviewer EcyD\", \"comment\": \"Thank you for your feedback, and here are our clarifications about your questions:\n\nReply to W#1:\n\nThe temporal validity function we proposed is indeed an empirical result. When designing the model, we also considered that using a learnable approach might achieve better performance. However, our current framework is already divided into three stages: rule learning, pre-trained model training, and rule application. Introducing an additional learning and training process for the temporal validity function would make the overall workflow cumbersome compared to previous rule-based methods. Through experiments, we found that designing a fixed temporal validity function can also achieve good results. Therefore, we adopted the current approach. 
However, we fully agree with your viewpoint that a learnable approach may provide better performance, and we will attempt to explore this issue in future work.\n\nReply to W#2:\n\nSince our model uses an existing algorithm (TR-Rules) to acquire rules during the rule learning stage, the performance of INFER is dependent on the quality of the learned rules. As can be seen from Table 1, compared to ICEWS14 and ICEWS18, the performance of TR-Rules on ICEWS0515 is the furthest from that of TECHS. This is because ICEWS0515 contains data spanning 10 years and is relatively sparse, which increases the difficulty of capturing stronger causal rules and assessing rule confidence during the rule learning process. Therefore, the relatively low quality of the learned rules affects the final performance of INFER. However, we can see that on ICEWS0515, INFER has significant improvements in MRR and Hits@1 compared to TR-Rules, which demonstrates the effectiveness of our proposed rule application paradigm.\n\nAs mentioned in Appendix F, INFER's current inability to effectively model the sequential order of events may, to a certain extent, limit its current capabilities. We will also explore effective ways to address this issue in our future work. 
Below is a comparison of INFER's performance when applying rules of different lengths with TR-Rules on the ICEWS14 dataset:\n\n**Length 1:**\n| Model | MRR | Hits@1 | Hits@3 | Hits@10 |\n| -------- | -------- | -------- | -------- | -------- |\n| TR-Rules | 40.59 | 30.90 | 46.31 | 59.14 |\n| INFER | **43.83** | **34.15** | **49.01** | **61.99** |\n\n**Length 2:**\n| Model | MRR | Hits@1 | Hits@3 | Hits@10 |\n| -------- | -------- | -------- | -------- | -------- |\n| TR-Rules | 15.28 | 6.40 | 16.82 | 35.29 |\n| INFER | **20.49** | **10.61** | **22.97** | **42.38** |\n\n**Length 3:**\n| Model | MRR | Hits@1 | Hits@3 | Hits@10 |\n| -------- | -------- | -------- | -------- | -------- |\n| TR-Rules | 40.45 | 31.46 | **45.52** | **57.35** |\n| INFER | **40.69** | **32.48** | 45.40 | 55.99 |\n\nIt can be observed that INFER greatly improves the effect of applying rules of length 1 and 2 compared to TR-Rules (rules of length 2 also sometimes include variable constraints). As for length 3, INFER still performs better on MRR and Hits@1. However, the results of INFER have more bias when dealing with rules of length 2 or 3, because we use 40 rules during evaluation and there are fewer high-confidence (quality) rules of length 2 or 3 compared with rules of length 1, which means when evaluating rules of length 3, we might use some low-quality rules which further lead to undesirable results.\n\nReply to W#4:\n\nWe fully understand your concerns. We believe that, compared to traditional rule-based methods, the current interpretable process presented by INFER can provide scores for candidates; it only lacks the path that leads to each candidate at each step. However, this issue can be resolved. We only need to add an argmax operation before line 5 in Algorithm 1, so that we can obtain the specific path indicating which entity from the previous hop leads to each candidate with the highest score. 
After obtaining such results, the reasoning process we provide will be completely consistent with traditional rule-based methods.\n\nReply to Q#1:\n\nYes, as we mentioned in line(295,296), it is an empirical result. We tested multiple designs of this function and corresponding results are reported in Appendix D, Table 4.\"}", "{\"title\": \"Reply to Reviewer qjZA #2\", \"comment\": \"Reply to W#4:\n\nAs we mentioned in Section 1, line(82-84), \u201cprevious rule-based methods consider the standing of facts as a binary 0 or 1 problem (have been true or not) for path matching, which ignores the validity as well as frequency of historical facts under temporal settings\u201d. The reason why we have the above claim is that when applying rules, previous methods search for the standing of rule bodies through path matching on the graphs. If we consider the knowledge graphs as weighted graphs, then normally there are only two kinds of weights: 0 and 1. 0 for the facts that have never appeared, i.e. there are no edges between two entity nodes. 1 for the facts that have been true in history, i.e. there\u2019s an edge of a specific relation between two entity nodes. In this paper, we argue that this setting is not suitable for TKGs. Details can be found in line 97-94 and 107-109.\n\nSo back to your question, \u201ctraditional binary truth values for historical facts\u201d in line 426 means that we assign 1 to all historical facts as long as they appeared before; otherwise, we do not update their values in the temporal weight matrices.\n\nReply to W#5:\n\nHere are the results of INFER on WIKI and YAGO; as we can see, INFER still outperforms existing methods and significantly improves the rule-based baselines on WIKI. As for YAGO, our model and other rule-based methods fall short compared with embedding-based methods. As mentioned in [1], some relations in YAGO cannot be modeled by cyclic rules, which take up to 10% of the test set. 
Since our model only uses cyclic rules, it is affected to a certain extent. However, INFER still gains improvements compared with TLogic.\n\n**WIKI:**\n| Model | MRR | Hits@1 | Hits@3 | Hits@10 |\n| -------- | -------- | -------- | -------- | -------- |\n| REGCN | 78.53 | 74.50 | 81.59 | 84.70 |\n| TITER | 73.91 | 71.70 | 75.41 | 76.96 |\n| TECHS | 75.98 | - | - | 82.39 |\n| TLogic | 78.93 | 73.05 | 84.97 | 86.91 |\n| INFER | **86.48** | **85.09** | **87.38** | **88.67** |\n\n\n**YAGO:**\n| Model | MRR | Hits@1 | Hits@3 | Hits@10 |\n| -------- | -------- | -------- | -------- | -------- |\n| REGCN | 82.30 | 78.83 | 84.27 | 88.58 |\n| TITER | 87.47 | 80.09 | 89.96 | 90.27 |\n| TECHS | **89.24** | - | - | **92.39** |\n| TLogic | 78.76 | 74.31 | 83.38 | 83.72 |\n| INFER | 83.74 | 83.54 | 83.72 | 84.31 |\n\n[1] Confidence is not Timeless: Modeling Temporal Validity for Rule-based Temporal Knowledge Graph Forecasting (Huang et al., ACL 2024)\n\nPlease feel free to reach out if you have any questions or require further clarification. If you find our response satisfactory, we hope you will consider this a valid reason to consider raising your rating.\"}", "{\"title\": \"Reply to Reviewer cyuQ\", \"comment\": \"Thank you for your insightful feedback on our paper. We appreciate your crucial suggestions. The following are our replies to your questions and summarized weaknesses:\n\nReply to W#1: \n\nThank you for pointing out the issues, and we will make the necessary modifications in the revised PDF. Regarding the number of rules, we provided an explanation for \\\"60\\\" on lines 406-408 of our paper, indicating that it represents the performance of INFER when using 60 rules for testing.\n\nReply to W#2:\n\nThank you very much for your constructive suggestions, which have helped us improve the expression of our paper. 
TLogic uses fewer candidates due to computational cost constraints, as it sets an **upper limit** of **20** candidates to consider. Once the number of searched candidates reaches 20, the TLogic algorithm will directly stop applying more rules to obtain more candidates, meaning that it does not consider the potential impact of additional rules on the current results. Therefore, it does not provide a superior method for selecting candidates but rather exhibits limited performance due to cost considerations. Our proposed INFER can efficiently consider a wider range of candidates to obtain more accurate results. Thus, we believe that INFER has higher efficiency, which allows it to consider more rules and candidate entities within acceptable time costs, thereby producing more accurate results.\n\nReply to W#3:\n\nThe similarity in the trends of these lines is, to some extent, due to the inevitable deterioration in model performance as the proportion of missing facts increases. We enhance the robustness of the rule-based model to this issue by introducing a pre-trained model. As can be seen in Figure 4, the slope of the blue line (INFER) is smaller than the other two lines, especially on ICEWS18, where the slope of the blue line is significantly gentler. 
This means that the design of our model has mitigated the impact of missing facts on the reasoning results to some extent.\n\nReply to W#4: \n\nHere we provide an explanation and example regarding Variable Constraints, and we will try our best to include these explanations in the revised version of the PDF.\", \"this_is_a_rule_with_variable_constraints\": \"$(A, support, C, T)\u2190(A, riot, B, T_1) \\bigwedge (B, make statement, A, T_2) \\bigwedge (A, resort, C, T_3)$\n\nAs we can see in the body of the rule, in the second hop, the entity that B makes a statement to has to be the same as the entity that rioted with B at T1.\", \"if_the_rule_is\": \"$(A, support, C, T)\u2190(A, riot, B, T_1) \\bigwedge (B, make statement, D, T_2) \\bigwedge (D, resort, C, T_3)$\n\nThen it is a rule without variable constraints, which is different from the rule given above, because the entity that B makes a statement to may or may not be the same as the entity that rioted with B at T1.\n\nIn summary, variable constraints mean that in some rules, the same entity is required to appear repeatedly so that the rule body is satisfied.\n\nReply to Q#1:\n\nFor facts (s, r, o) that have never appeared, it is meaningless to calculate their Temporal Validity because Temporal Validity measures the validity of a historical fact at the current moment (it should not be considered a historical fact if it has never appeared). Therefore, we will not update its value in the Temporal Weight Matrices. This means that its corresponding value will be the probability given by the pre-trained model. This process aligns with our motivation to address the bottleneck caused by the incompleteness of TKGs, because the absence of some facts may invalidate premises that should have been matched with the bodies of rules.\n\nReply to Q#2:\n\nOur reply to W#2 can partially address this question. 
TLogic is limited by the computational cost of path matching, which forces it to stop searching for candidates after a certain number have been found. In contrast, our proposed INFER can efficiently consider a wider range of candidates to obtain more accurate results.\n\nIf you are not satisfied with our clarification, please feel free to discuss with us.\"}", "{\"title\": \"Looking Forward to Your Reply\", \"comment\": \"Dear Reviewer qjZA,\n\nI hope this message finds you well. As the deadline for the reviewer-author discussion draws near, we would like to kindly request your input on our rebuttal. We understand your time is valuable, and we greatly appreciate your initial engagement with our paper. If you could spare a moment to review our responses and consider the clarification and extra experiments we've made, we would be truly grateful. \n\nThank you once again for your time. Please feel free to reach out if you have any questions or require further clarification. If you find our response satisfactory, we hope you will consider this a valid reason to consider raising your rating.\"}", "{\"title\": \"Reply to Reviewer agps #2\", \"comment\": \"Reply to Q#2:\n\nAs far as we know, no one has ever tried to develop models for TKG extrapolation reasoning based on Neural LP. We are the first to propose a paradigm for integrating rules with embedding methods for TKG extrapolation. There\u2019s one paper published at ACL 2024 that proposed a machine learning method for rule confidence learning, as we mentioned in line 229. However, it focuses on improving the rule learning stage while our method mainly improves the rule application stage. As we said in line 229, we believe the more accurate confidence given by their model can boost the performance of INFER.\n\nPlease feel free to reach out if you have any questions or require further clarification. 
If you find our response satisfactory, we hope you will consider this a valid reason to consider raising your rating.\"}", "{\"comment\": \"Thank you for getting back to me. My concerns have been largely addressed, and I will raise my score\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to Reviewer qjZA\", \"comment\": \"Thank you for your insightful feedback on our paper. We appreciate your crucial suggestions. The following are our replies to your questions and summarized weaknesses:\n\nReply to W#1:\n\nWe have two main motivations. Firstly, traditional rule-based models, when applying rules through path matching, do not consider the validity of historical facts and treat recent facts and ancient facts equally (lines 82-84 in our paper). Secondly, traditional rule-based models are greatly affected by missing information, leading to inference bottlenecks. To address the first issue, we propose a temporal validity function that considers the time span and frequency of historical facts, extending the truth value of facts from binary to continuous and modeling fact validity. The main experiment and ablation experiment verify this motivation. To address the second issue, we introduce a static KG embedding model, which captures structural information through pre-training. We then propose the entire INFER paradigm, which combines embedding information with improved rule application by constructing and updating the Temporal Weight Matrix and proposing the Projection operator. The resulting neural-symbolic method gains the ability to reason about missing information and its performance is less affected under incomplete settings compared to traditional rule-based models (Results in Table 2). Meanwhile, the efficiency of applying rules is greatly enhanced through performing matrix operations instead of path matching. 
More importantly, our model provides a new paradigm to combine embedding-based methods and symbolic-based methods, paving the way for future research.\n\nAs we mentioned in section 4.3.3, when answering query $(s,r,?,t)$, we first traverse the available rule\u2019s body, obtaining the relations $rl_1,rl_2$.... At the starting point we select the $s$-th row of $M_{rl_1}$, which records the truth values of each entity serving as the object entity in facts with subject entity $s$ and relation $rl_1$, $(s,rl_1,e_i);e_i\u2208E$. If the rule length is 1, then this vector denotes the scores of all entities given by this rule. If the rule length is more than 1, as mentioned in the paper we transpose the row vector and repeat it along the row direction $|E|$ times, which results in a matrix of $|E|\u00d7|E|$. Then we calculate the Hadamard product of the obtained matrix and the corresponding temporal weights matrix, which is the result of the **AND** of facts along each possible path. Finally, we take the maximal value of each column, which selects the biggest truth value of a path starting from $s$ and ending at each entity $e_i$. This process is tractable and can be interpreted as human-readable results. For example, by performing an argmax operation, we can show which specific path leads to the maximal score of $e_i$. As we repeat the above procedure, the final result is the vector recording the maximal truth values of paths leading to each entity.\n\nWe are sorry about the confusing presentation. In equation 5, **$Ans_i$** is a row vector which denotes the scores of all candidates at the $i$-th hop of a rule, and $n$ is the length of the rule. As for the Ans in Algorithm 1, it is also a row vector representing the scores of all candidates at each step.\n\nReply to W#3:\n\nThanks for your advice. Theoretically, the spatial complexity of INFER mainly depends on the temporal weight matrices: O(|E| * |E| * |R|). 
Although it seems costly, TKGs in the real world are really sparse, and we can use sparse matrices to store them. Also, as mentioned in line 259, we set a threshold to filter out some really small values so that the matrices can be stored on a single GPU. Here are some statistics to illustrate the sparsity of TKGs:\n\nFor ICEWS14, the facts it contains take up only 0.00043% of the whole matrices, which means the remaining entries in the Temporal Weight Matrices are zero. The proportion for ICEWS18 is 0.00020%, and 0.00060% for ICEWS05-15.\n\nAs for the time complexity, if the length of the rule is 1, then we just select a specific row of the corresponding Temporal Weight Matrix $M_r$. Thus, the time complexity is $O(1)$.\n\nIntuitively, when the rule is longer than 1, the time complexity of the inference procedure of INFER is $O(|E|^2)$ due to the Hadamard Product, which seems high. However, due to the sparse property of TKGs, in fact there are many zero values in $Ans$ (the vector that stores the intermediate candidate list); thus we actually implement it in a more efficient way by only selecting the non-zero values in Ans and calculating the product of the values and their corresponding rows in $M_r$. In this way, the time complexity is $O(|E| * len(Ans>0))$.\"}", "{\"summary\": \"This paper introduces INFER, a neural-symbolic model designed for Temporal Knowledge Graph (TKG) extrapolation reasoning. Traditional rule-based methods for TKGs, though interpretable, struggle with temporal reasoning as they treat facts as binary and ignore temporal frequency and validity. INFER addresses these issues through a Temporal Validity Function, which enables continuous truth values and models the frequency and validity of historical facts for better temporal adaptation. Additionally, INFER incorporates Temporal Weight Matrices with a pre-trained static KG embedding model to improve inference quality. 
A rule projection module enhances computational efficiency by leveraging GPU-optimized matrix operations, allowing INFER to scale effectively and integrate with embedding-based approaches. Experiments show that INFER achieves state-of-the-art results on several datasets, demonstrating enhanced inference capabilities over existing models, particularly in sparse TKG settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Temporal Knowledge Graph (TKG) reasoning is an important research topic.\\n 2. The proposed method shows better results compared to existing rule-based methods.\", \"weaknesses\": \"1. The motivation behind the method design is unclear, and the description lacks clarity. For example, in Section 4.3.3, the rationale for the rule projection strategy is not well-explained, and the meanings of terms like \\u201cAns_i\\u201d and \\u201cAns\\u201d are not clarified.\\n2. Limited novelty. The techniques used in the proposed method do not introduce significant innovations.\\n3. The temporal and spatial complexity of the inference process appears high, which might impact practical applicability. It would be beneficial to provide an analysis of the computational complexity.\\n4. Some experimental details are insufficiently explained. For example, in Line 426, the phrase \\u201ctraditional binary truth values for historical facts\\u201d needs further clarification. It is unclear what specific criteria or methods are used to assign these binary truth values to historical facts.\\n5. The dataset coverage is insufficient. The paper only uses the ICEWS dataset, which belongs to a specific category of TKG data. 
It would be valuable to include additional datasets like WIKI or GDELT to demonstrate the method\u2019s generalizability across different data types.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To address the problems that rule-based methods for temporal knowledge graph inference have insufficient reasoning ability when graph facts are missing and that fact states are treated as simply binary, ignoring the validity and frequency of historical facts, this paper proposes the neural-symbolic model INFER, which quantifies fact credibility in the time dimension by introducing a time validity function. At the same time, the time weight matrix is introduced so that the model can infer missing facts and deal with the incompleteness of the graph. And to improve the efficiency of rule reasoning, a rule projection module is proposed, which uses GPU-based matrix operations instead of traditional path matching.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1) This paper introduces effective means such as the time validity function to quantify fact credibility in the time dimension and proves its effectiveness through experiments, which provides ideas for how to improve the reasoning ability of rule-based methods in dealing with incomplete graphs.\n(2) The average performance of the experiment is good on multiple datasets, and self-made TKG data with sparse facts are constructed. 
The experimental results show that the INFER model is significantly better than other methods when the facts are sparse.\n(3) The chart is clear, the paper is highly polished, and it is easy to interpret.\", \"weaknesses\": \"(1) The design of the time validity function: The time validity function proposed in this paper calculates the time weight of historical facts based on the time interval and frequency of fact occurrence. Although the above two terms are considered at the same time, the function form is relatively fixed and more dependent on experience. Adapting the attenuation rate using data-driven methods may enhance the model's adaptability.\n(2) The performance of the model on the ICEWS05-15 dataset does not exceed that of TECHS, and there is no detailed analysis of the results in this paper.\n(3) The rule projection module used by INFER loses the ability to directly model the sequence of facts to a certain extent, which may affect the accuracy in scenarios requiring strict time order or multi-hop reasoning. Additional experiments are needed for evaluation, especially for long rule samples with variable constraints. \n(4) INFER introduces neural network embedding and complex matrix operations, and although the authors show entity scores when the rules are applied, it still damages the interpretability of the traditional rule model to some extent.\", \"questions\": \"(1) Is the use of the square root form of the time decay term in the time validity function based on the conclusions obtained from experiments?\n(2) What is the additional overhead if you replace the static embedding model with a temporal embedding model? And what is the performance boost?\n(3) When calculating the rule confidence, the INFER model adopts the rule learning algorithm in TR-Rules. The rule base retains only cyclic rules and filters out acyclic rules. TR-Rules uses acyclic rules and proves its effectiveness. What is the reason for filtering acyclic rules here? 
Is there any experimental support?\\n(4) Does the static embedding model provide the same level of confidence for the same fact at different time points?\\n(5) How does INFER's rule projection module perform when dealing with rules of different lengths?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to your reply\", \"comment\": \"Dear Reviewer cyuQ,\\n\\nI hope this message finds you well. We are grateful for your recognition of work and sincerely appreciate your valuable and comments. We eagerly anticipate receiving any additional feedback you may have for the clarification we've provided. We totally understand that your time is valuable. Thanks again for your valuable time and efforts and we are looking forward to your response.\"}", "{\"summary\": \"The paper introduces INFER, a neural-symbolic approach to temporal knowledge graph extrapolation. INFER uses a Temporal Validity Function that captures how frequently facts occur, as well as their validity over time using continuous values. INFER uses pre-trained static knowledge graph embeddings to construct Temporal Weight Matrices. A rule projection module that reformulates rule application as matrix operations.\\nThe paper evaluates INFER's effectiveness using three ICEWS datasets. When tested on modified sparse temporal knowledge graph datasets, INFER shows promising inference capabilities. These results highlight that INFER's combination of continuous temporal validity scoring and GPU-optimized rule application offer useful techniques for temporal knowledge graph reasoning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a new scoring mechanism for temporal rule validity. Alternate techniques in the literature appear to be more complicated.\\n\\n2. 
The paper evaluates differentiable rule-based inference systems across several ICEWS datasets, providing a demonstration of these systems on temporal knowledge graphs that represent time in event \\\"timestamp\\\" form.\", \"weaknesses\": \"1. The author's claimed novelty rests on acceleration of rule-based processing using matrix operations on a GPU. This is common in differentiable rule-learning systems, apparently first introduced as TensorLog, with associated inductive learning system Neural-LP, see e.g. [1].\\n\\n2. The paper claims: \\\"Experimental results show that INFER achieves state-of-the-art performance on three datasets and significantly outperforms existing rule-based models on our modified, more sparse TKG datasets, which demonstrates the superiority of our model in inference ability.\\\" The authors should consider that both embedding based and rule based systems can perform quite well relative to the methods they compare against. For instance, consider the following comparison with TimePlex [2]:\\n\\n| | ICEWS14 | ICEWS14 | ICEWS14 | ICEWS05-15 | ICEWS05-15 | ICEWS05-15 |\\n|--------|---------|-----|---------|----------|---------|-----|\\n| Method | MRR | HITS@1 | HITS@10 | MRR | HITS@1 | HITS@10 |\\n| TimePlex | **60.40** | **51.50** | **77.11** | **63.99** | **54.51** | **81.81** |\\n| INFER | 44.09 | 34.52 | 62.14 | 48.27 | 37.61 | 68.52 | \\n\\nThe table below illustrates the methods similar to those compared in this paper evaluated on wikidata and yago data sub-sets. The table also includes a rule-based method (TILP [3]) that is demonstrated to perform on par with Timeplex, illustrating that the performance gap between TimePlex and INFER show above may not be limited to embedding based methods. 
\\n\\n| | WIKIDATA12k | WIKIDATA12k | WIKIDATA12k | YAGO11k | YAGO11k | YAGO11k |\\n|--------|---------|-----|---------|----------|---------|-----|\\n| Method | MRR | HITS@1 | HITS@10 | MRR | HITS@1 | HITS@10 |\\n| TLogic | 0.2536 | 0.1754 | 0.4424 | 0.1545 | 0.1180 | 0.2309 |\\n| ComplEx | 0.2482 | 0.1430 | 0.4890 | 0.1814 | 0.1146 | 0.3111 |\\n| TA-ComplEx | 0.2278 | 0.1269 | 0.4600 | 0.1524 | 0.0936 | 0.2626 |\\n| DE-SimplE | 0.2529 | 0.1468 | 0.4905 | 0.1512 | 0.0875 | 0.2674 |\\n| TimePlex | **0.3335** | 0.2278 | **0.5320** | 0.2364 | **0.1692** | 0.3671 |\\n| TILP | 0.3328 | **0.2342** | 0.5289 | **0.2411** | 0.1667 | **0.4149** |\\n\\n\\n[1] @inproceedings{10.5555/3294771.3294992,\\nauthor = {Yang, Fan and Yang, Zhilin and Cohen, William W.},\\ntitle = {Differentiable learning of logical rules for knowledge base reasoning},\\nyear = {2017},\\nisbn = {9781510860964},\\npublisher = {Curran Associates Inc.},\\naddress = {Red Hook, NY, USA},\\nabstract = {We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space. We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model. This approach is inspired by a recently-developed differentiable logic called TensorLog [5], where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose these operations. 
Empirically, our method outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies.},\\nbooktitle = {Proceedings of the 31st International Conference on Neural Information Processing Systems},\\npages = {2316\\u20132325},\\nnumpages = {10},\\nlocation = {Long Beach, California, USA},\\nseries = {NIPS'17}\\n}\\n\\n@inproceedings{jain-etal-2020-temporal,\\n title = \\\"{T}emporal {K}nowledge {B}ase {C}ompletion: {N}ew {A}lgorithms and {E}valuation {P}rotocols\\\",\\n author = \\\"Jain, Prachi and\\n Rathi, Sushant and\\n {Mausam} and\\n Chakrabarti, Soumen\\\",\\n editor = \\\"Webber, Bonnie and\\n Cohn, Trevor and\\n He, Yulan and\\n Liu, Yang\\\",\\n booktitle = \\\"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\\\",\\n month = nov,\\n year = \\\"2020\\\",\\n address = \\\"Online\\\",\\n publisher = \\\"Association for Computational Linguistics\\\",\\n url = \\\"https://aclanthology.org/2020.emnlp-main.305\\\",\\n doi = \\\"10.18653/v1/2020.emnlp-main.305\\\",\\n pages = \\\"3733--3747\\\",\\n abstract = \\\"Research on temporal knowledge bases, which associate a relational fact (s,r,o) with a validity time period (or time instant), is in its early days. Our work considers predicting missing entities (link prediction) and missing time intervals (time prediction) as joint Temporal Knowledge Base Completion (TKBC) tasks, and presents TIMEPLEX, a novel TKBC method, in which entities, relations and, time are all embedded in a uniform, compatible space. TIMEPLEX exploits the recurrent nature of some facts/events and temporal interactions between pairs of relations, yielding state-of-the-art results on both prediction tasks. We also find that existing TKBC models heavily overestimate link prediction performance due to imperfect evaluation mechanisms. 
In response, we propose improved TKBC evaluation protocols for both link and time prediction tasks, dealing with subtle issues that arise from the partial overlap of time intervals in gold instances and system predictions.\\\",\\n}\\n\\n[3] @inproceedings{xiongtilp,\\n title={TILP: Differentiable Learning of Temporal Logical Rules on Knowledge Graphs},\\n author={Xiong, Siheng and Yang, Yuan and Fekri, Faramarz and Kerce, James Clayton},\\n booktitle={The Eleventh International Conference on Learning Representations}\\n}\", \"questions\": \"1. How does your acceleration method differ in nature from TensorLog and Neural-LP?\\n2. Many methods have developed from the Neural-LP approach. How does your method compare to those?\\n3. Why do you not compare to TimePlex as the state-of-the-art method that has been demonstrated on these ICEWS datasets?\\n4. Can the gap in performance with TimePlex and rule-based systems be explained in terms of the differences between timestamp and interval-time objectives of different temporal knowledge graph methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer EcyD #2\", \"comment\": \"Reply to Q#2:\\n\\nCurrently we have $|R|$ temporal weight matrices and the size of each of them is $|E| * |E|$. If we use a temporal embedding model, then for $|T|$ timestamps, we need to build $|R|$ matrices of $|E| * |E|$ for each of them, since each fact might have a different score at different timestamps. This is the overhead issue.\", \"however_we_choose_to_use_static_embedding_model_not_only_because_of_the_overhead_issue\": \"Temporal embedding models can only score facts at timestamps that they have seen during the training stage. However, under extrapolation settings, the timestamps in the training set do not overlap with the timestamps in the validation and test set. 
When we use the matched bodies that appeared after the training set timestamps but before the query timestamps, the temporal model cannot give a reasonable score to them. Nevertheless, static embedding models can learn the overall structural information and give fair scores based on historical facts for future usage.\n\nSecondly, as we mentioned in lines 256-258, during rule application, unlike traditional rule-based models\u2019 path matching, INFER does not consider the specific timestamp of facts. Thus, we do not need a score for a fact at a specific timestamp. The structural information of the whole set of historical facts better serves as the basis for estimating the probabilities of potential missing facts.\n\nReply to Q#3:\n\nWe have observed that the improvement brought by acyclic rules in TR-Rules is not very significant, and the relatively large number of such acyclic rules may result in greater overhead. (TR-Rules reports that the number of acyclic rules used is far greater than the number of cyclic rules.) Therefore, we have not attempted to utilize acyclic rules so far. As mentioned in Appendix F, we will explore this aspect in our future work.\n\nReply to Q#4:\n\nYes. Since the pre-trained static embedding model scores each fact ignoring the timestamps, it gives the same score to the same fact at different timestamps.\n\nReply to Q#5:\n\nPlease see the results given in reply #1 to W#3.\n\nIf you still have any concerns, please feel free to comment. If you find our response satisfactory, we hope you will consider this a valid reason to raise your rating.\"}" ] }
EwYUgKr9Fc
Semantic Membership Inference Attack against Large Language Models
[ "Hamid Mozaffari", "Virendra Marathe" ]
Membership Inference Attacks (MIAs) determine whether a specific data point was included in the training set of a target model. In this paper, we introduce the Semantic Membership Inference Attack (SMIA), a novel approach that enhances MIA performance by leveraging the semantic content of inputs and their perturbations. SMIA trains a neural network to analyze the target model’s behavior on perturbed inputs, effectively capturing variations in output probability distributions between members and non-members. We conduct comprehensive evaluations on the Pythia and GPT-Neo model families using the Wikipedia and MIMIR datasets. Our results show that SMIA significantly outperforms existing MIAs; for instance, for Wikipedia, SMIA achieves an AUC-ROC of 67.39\% on Pythia-12B, compared to 58.90\% by the second-best attack.
[ "Membership Inference Attack", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=EwYUgKr9Fc
https://openreview.net/forum?id=EwYUgKr9Fc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zV3nljktH8", "yE7Q5yn41V", "wG0JFASTl4", "mqE4fOvNaZ", "mWTqqWus0C", "lTHwGVfRUS", "lPj7WDTU6V", "inOUre7XFQ", "dqC8D6FI7N", "cyMcgyRvP5", "aSzEE8YrHa", "VxRiDTbfTB", "Uu7fo9wHye", "RktZKLJHDG", "RPjtYLvRoh", "JU0ilg5HAX", "JO61ERbodx", "Gg9vQ8860E", "CFblmusyTS", "CCm1cxgDoO", "9TghMpN1Rh", "8OTjp4IjGh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732612064014, 1732636813147, 1732642226967, 1732644497557, 1733209350407, 1737523686645, 1734606936008, 1731315204095, 1732635268902, 1732566793455, 1731074654435, 1732640924887, 1733247187106, 1732634485314, 1730681533464, 1732637238416, 1730668135366, 1732636843674, 1732636215441, 1732633820032, 1732633442462, 1732634749458 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_zLuE" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_zLuE" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_8LnL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5142/Area_Chair_WKZu" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_8LnL" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_rYUv" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_31Mw" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_zLuE" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_rYUv" ], [ 
"ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Reviewer_zLuE" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ], [ "ICLR.cc/2025/Conference/Submission5142/Authors" ] ], "structured_content_str": [ "{\"title\": \"Update\", \"comment\": \"~~After reading the other reviewers' comments and due to no response from the authors, I keep my verdict and recommend rejecting this paper.~~\\nThe authors have now responded; see below.\"}", "{\"title\": \"Rebuttal to Reviewer zLuE (part 2)\", \"comment\": \"**(w5)- I found two things in Figure 2 slightly confusing: i) the axis label, and ii) that all neighbors are larger on one axis; I understood that the similarity is somewhat independent of the parameter space? ...**\\n\\nIn this figure, the y-axis represents the loss values assigned by the model to members, non-members, and their neighbors, illustrating the model's behavior. The x-axis is a compressed representation of the semantic differences between data points, capturing how semantically similar or different the inputs are. We acknowledge that neighbors can indeed have varying positions on both axes; sometimes, a neighbor generated by word replacement may result in higher or lower loss values and semantic differences. To address your concerns, we have revised the figure by moving $x_1^m$ further from $x^n$ to better depict these variations and enhance clarity. We believe this figure is informative as it visually demonstrates the two key features that SMIA leverages in a simplified scenario. Additionally, we have fixed the typo in the caption.\\n\\n----\\n\\n**(q1)- Does the replacement actually happen in terms of words, or should it be tokens?**\\n\\nIn our method, the replacements are performed at the word level rather than at the token level. 
Replacing at the token level can lead to inconsistent or nonsensical content because tokens may represent subword units or fragments of words. Replacing only part of such a tokenized word would result in meaningless or grammatically incorrect text. Therefore, following previous works [Mattern et al. (2023) and Mitchell et al. (2023)], we replace entire words with other words or phrases generated by a T5 model. This approach ensures that the modified text remains coherent and semantically meaningful, which is crucial for maintaining the integrity of the data and the validity of our semantic analysis.\n\n---\n\n**(q2)- For the neighbor generation, are the resulting neighbors all unique, or could it be that some neighbours are duplicates/equal to the original sample?**\n\nIn our approach, the generated neighbors are generally unique and differ from the original samples. While it is theoretically possible for the word replacement process to select the same word, this is highly unlikely. We use a T5 model to perform word replacements, which typically generates semantically similar but different words, leading to unique neighbors. For instance, as shown in Figure 5(b), we provide an example where a neighbor generated for a Wikipedia sample includes a replaced word that differs from the original, creating a distinct and meaningful variation for our analysis.\n\n----\n\n**(q3)- Were T5 or the Cohere Embedding V3 model trained on some of the evaluation datasets? In particular, could it be that one of those models was only trained on the members for WC but not the non-members?**\n\nWe are not aware of the specific datasets these models were trained on, as their training data are not publicly disclosed in detail. However, we utilize these models as general-purpose tools for word replacement and embedding generation, assuming they provide broad semantic capabilities rather than being tailored to specific datasets. 
To address the concern that these models might have been trained predominantly on the member data and not the non-members\\u2014potentially influencing our results\\u2014we conducted additional experiments using a different embedding model, E5-mistral-7b-instruct, as presented in Section C.7. The consistent performance of SMIA with this alternative embedding model demonstrates the robustness of our method irrespective of the specific embedding model used. This suggests that our findings are not significantly affected by the training data of the embedding models, and SMIA remains effective even when the embeddings are generated from models with different training histories.\\n\\n----\\n\\n**(q4)- Why are the MIMIR splits constrained to only 1000 samples, especially before splitting?**\\n\\nWe utilized the MIMIR dataset exactly as it is provided, without imposing any additional constraints on the number of samples. Each subsplit in the MIMIR dataset consists of 1,000 member samples and 1,000 non-member samples, as specified in its documentation on HuggingFace. We did not filter out or exclude any samples in our experiments; instead, we used all the data available in each subsplit.\\n\\n----\"}", "{\"title\": \"Response to reviewer zLuE r\", \"comment\": \"Thank you for your thoughtful feedback and for acknowledging our efforts in addressing your previous concerns. We appreciate your time and constructive criticism, which helps us improve our work.\\n\\n\\nWe understand your concern about including the WC dataset in our evaluations. 
Our intention was to provide a comprehensive analysis by including both challenging datasets like WT and the four splits of MIMIR\\u2014which are our primary focus for assessing true membership inference performance\\u2014and datasets like WC to illustrate the behavior of MIAs in scenarios with distributional shifts.\\n\\nIn the main body of the paper, we have emphasized that WT and MIMIR are our main datasets for evaluating SMIA's effectiveness in realistic settings. Including the WC dataset was meant to offer additional insights into how MIAs perform when there is a distributional difference between members and non-members. We believe that presenting results on the WC dataset provides readers with valuable information about the performance of MIAs in such contexts.\\n\\nIn Tables 2 and 4, we provide the performance of all baseline attacks on the WC dataset to demonstrate that even in a dataset with distributional shifts, SMIA outperforms other methods. For example, on WC, SMIA achieves a TPR@1% FPR of 36.4%, while the LOSS attack achieves 6.9% and MiNK++ achieves 18%. This comparison highlights SMIA's effectiveness relative to existing baselines, even in different scenarios.\"}", "{\"comment\": \"I thank the authors for their prompt response. However, it does not change the aforementioned concerns and I will have to keep my score.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed and thoughtful rebuttal. I appreciate the care you've taken to address each concern systematically. Let me share my thoughts on your responses:\\n\\n**Regarding w1 (WC-based assessment):**. \\nI appreciate your acknowledgment of the potential issues with WC-based evaluation and your clarification about the primary focus on WT datasets. However, I remain concerned that the paper's current structure and results presentation might still lead readers to draw conclusions from the WC results. 
I suggest:\\n- Moving the WC results entirely to an appendix\\n- Adding explicit warnings about interpretation of WC results in all relevant figure/table captions\\n- Strengthening the emphasis on WT as the primary evaluation metric in the abstract and introduction\\n- Make the nomenclature for WT and WC absolutely clear. It should not be hard to find their meaning.\\n\\n**Regarding w2 (Testing across Pile splits):**. \\nI acknowledge your explanation about the fundamental difference between dataset-level and sample-level inference. My comment was in regards to Fig 3 which you point to later. I am not sure why this should be computationally expensive. Could you take a 1000 example subset of train and test to do this?\\n\\n**Regarding w3 (Blind baseline comparison):** \\nI commend you on adding Section C.6 with the blind baseline comparisons. However, I have some important recommendations:\\n- It is important to move these results to the main paper rather than the appendix. This is a critical baseline. The independent table in appendix offers no comparative discussion to the average reader.\\n- Add a brief discussion of why your method succeeds where blind attacks fail\\n- Include error analysis for cases where your method outperforms the baselines\\n\\n\\n**Regarding the use of MIMIR dataset**: \\nPlease check the line below Table 2 in this paper. The authors explicitly mention: **We clarify that this step is not a\\nsuggestion for researchers to alter their benchmarks**\", \"in_conclusion\": \"This method might very well be promising. But the current presentation, the focus on irrelevant and unsound benchmarks leads to the authors hurting their own paper and its promise. This work needs to be re-written with a focus on the right practices for membership inference research, with a focus on the right train-test splits. 
I would strongly encourage the authors to extend this to multiple Pile subsets following the recommendations in [2], avoid the use of the MIMIR dataset as suggested by [1], and include blind baselines in the main paper as suggested in [3]. On top of this, WC should only see a place in the Appendix, if at all.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper introduces a novel approach to membership inference attacks against Large Language Models, centered around how these models exhibit distinct behavioral patterns when processing semantically similar variants of their training data versus unseen data. The authors develop a method that analyzes model responses to semantic neighbors of input texts, demonstrating improvements over existing approaches.\\n\\nThe technical approach combines neighbor generation using masked language models, semantic embedding analysis, and neural network classification. This allows the method to detect both exact matches and semantically similar content, representing an advancement in the field. The experimental evaluation examines performance across different model sizes, architectures, and datasets, while considering dataset distribution effects through different non-member datasets. The analysis of modified text scenarios through word additions, deletions, and duplications provides valuable practical insights into the method's robustness.\\n\\nHowever, the submission has several significant weaknesses. A fundamental concern is that the evaluation datasets might be measuring distribution shifts rather than true membership inference performance. Recent work demonstrates that simple baselines can achieve high success on temporally shifted articles, casting doubt on the reported improvements. The paper's heavy reliance on a single metric is problematic, and the limited reporting of other metrics makes it difficult to assess real-world effectiveness. 
Additionally, while the method involves multiple complex components, there's insufficient investigation of their necessity through ablation studies.\\n\\nThe paper omits several important comparisons that would strengthen its evaluation, including performance analysis across train-test splits, comparison against baseline approaches, and exploration of alternative embedding approaches. The selection of key parameters lacks thorough justification and analysis of trade-offs between computational cost and performance gain.\\n\\nWhile the technical approach is novel and well-presented, the fundamental concerns about evaluation methodology cast serious doubt on the paper's main claims. The possibility that the method is primarily detecting distribution shift rather than true membership undermines its purported advances. To strengthen this work, the authors should expand the evaluation to include comprehensive comparisons against baselines, incorporate additional metrics, conduct thorough ablation studies, and provide clearer justification for design choices and parameter selection. These improvements would help establish the method as a meaningful advancement in membership inference attacks against language models.\", \"additional_comments_on_reviewer_discussion\": \"The paper review process revealed significant concerns about the SMIA methodology and its evaluation approach. The primary issue centered on the use of temporally split Wikipedia data for evaluation, as recent research has shown that simple baselines can achieve high success rates on such splits, suggesting potential confusion between distribution shifts and actual membership information. The choice of evaluation metrics also drew criticism, with arguments that the AUC-ROC metrics alone were insufficient for properly assessing membership inference attacks. Though some additional metric results were provided in response, they were not comprehensive across all experimental settings. 
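For reference, the TPR-at-low-FPR statistic behind this metric criticism can be computed directly from per-sample attack scores. A generic sketch (the function name and toy scores are illustrative, not taken from the paper's evaluation code):

```python
def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """Calibrate the decision threshold so that at most `fpr` of the
    non-members are (falsely) flagged, then report the fraction of
    members whose score clears that threshold."""
    allowed_fp = int(len(nonmember_scores) * fpr)
    # The (allowed_fp + 1)-th largest non-member score becomes the threshold.
    threshold = sorted(nonmember_scores, reverse=True)[allowed_fp]
    return sum(s > threshold for s in member_scores) / len(member_scores)

members = [0.9, 0.8, 0.3, 0.2]  # higher score = "more member-like"
nonmembers = [0.85, 0.7, 0.6, 0.5, 0.4, 0.35, 0.25, 0.15, 0.1, 0.05]
print(tpr_at_fpr(members, nonmembers, fpr=0.1))  # 0.5: half the members caught
```

Two attacks with identical AUC-ROC can differ sharply on this statistic, which is why reporting it at FPRs such as 1% or 0.1% was the reviewers' preferred practice.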
Critics also pointed out the lack of thorough ablation studies and clear justification for various architectural and hyperparameter choices, with only partial responses addressing these concerns. Practical implementation concerns were raised regarding the dependency on specific embedding models and associated privacy implications, along with questions about scalability for larger datasets and more complex architectures. These issues remained largely unaddressed. The final rejection decision was primarily influenced by the fundamental concerns about the evaluation methodology, as the potential confusion between distribution shift and membership inference significantly weakened the paper's core arguments. While the technical limitations could have been addressed through revision, the fundamental methodological issues would require substantial changes to the paper's approach and experimental design.\"}", "{\"summary\": \"This paper studies the problem of membership inference in large language models by proposing \\\"semantic MIAs\\\". The key idea is to use various perturbations to the input text, then analyze the behavior of a model on these points, train an auxiliary model on these behaviors, and finally use that model to predict membership. The authors show that SMIAs outperform prior MIAs on different benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper does a good job at explaining their methodology, which considers the fact that learning a classifier on top of LLM behaviors can allow learning membership signals well.\\n2. The work demonstrates strong gains over past MIAs across multiple benchmark datasets.\", \"weaknesses\": \"There are various works at this point in literature which have argued that \\\"WC\\\" based assessment of MIAs when data is split across a cut-off date, is not sound. 
This work bases many of their gains on that setting.\\n\\n[1] Do Membership Inference Attacks Work on Large Language Models? https://arxiv.org/abs/2402.07841. \\n[2] LLM Dataset Inference: Did you train on my dataset? https://arxiv.org/abs/2406.06443. \\n[3] Blind Baselines Beat Membership Inference Attacks for Foundation Models. https://arxiv.org/abs/2406.16201.\", \"key_weaknesses\": \"1. The method should be tested across all 20 train-test splits of Pile. Paper [2] on LLM Dataset Inference particularly shows how most MIAs perform well only on a few datasets\\n2. The blind baseline of n-gram is not considered in this work (paper [3]). This is an important comparison to understand if the gains in this work are meaningful.\\n3. Given that the goal of MI in LLMs is hard, I would love to see experiments on dataset inference.\\n4. The idea of learning an auxiliary classifier on top of the membership signal from neighbours seems to have been explored in [2]\", \"questions\": [\"How does the member data of the embedding model impact the performance of the SMIA\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer rYUv\", \"comment\": \"We thank the reviewer for their feedback. We provide responses for the weaknesses and questions raised below:\\n\\n----\\n\\n**(w1 & q1)- Could the authors provide some examples of grey-box models?**\\n\\nIn our work, the grey-box scenario refers to a threat model where the adversary has access only to the loss values returned by the target model for texts of their choosing. This setting is practical for models accessed via APIs that provide limited feedback, such as loss or perplexity scores, without exposing internal parameters or detailed outputs like per-token logits. 
Examples of grey-box models include cloud-based language services and commercial language models that offer summary metrics but restrict deeper access to prevent reverse engineering or misuse. Our attack, SMIA, is designed to operate effectively under these conditions, relying solely on loss values. This contrasts with prior works that require white-box access (e.g., [AA, BB]) or exact per-token logits (e.g., MinK and MinK++). In all our experiments, we simulate this grey-box scenario by using only the loss values from the target models, demonstrating the practicality and applicability of SMIA in real-world settings.\\n\\n[AA] : Milad Nasr, Reza Shokri, and Amir Houmansadr. \\\"Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning\\\". In 2019 IEEE symposium on security and privacy (SP)\\n\\n[BB] Suri, Anshuman, Xiao Zhang, and David Evans. \\\"Do Parameters Reveal More than Loss for Membership Inference?.\\\" arXiv preprint arXiv:2406.11544 (2024).\\n\\n-----\\n\\n**(w2 & q2)- How will the performance of SMIA change if changing the embedding model or classification models?**\\n\\nWe have added Section C.7 to present additional ablation experiments that demonstrate the robustness of SMIA under variations in both embedding models and classifier network sizes. Specifically, Table 9 includes results using the E5-mistral-7b-instruct embedding model alongside the original Cohere v3 model, showing consistent performance across different embedding models. Additionally, Table 10 explores the impact of varying classifier network sizes by comparing the original network with a smaller linear classifier and a larger network with additional fully connected layers. These results collectively highlight SMIA\\u2019s adaptability to different model configurations.\\n\\n-----\\n\\n**(w3 & q3)- What is the general behavior of loss from members? 
For example, will the loss from members change less than non-members considering neighbours?**\\n\\nIn MIAs, it is commonly observed that target models assign lower loss values to members due to overfitting, resulting in a distinct loss trend compared to non-members. We provide these baseline results as LOSS attack in our experiments. However, relying solely on loss values can be problematic because certain non-member inputs\\u2014such as short or repetitive texts\\u2014may also receive low loss values, leading to false positives. Our proposed method, SMIA, addresses this issue by incorporating semantic embeddings of the text. By leveraging the semantic information, SMIA captures the contextual relationships between data points, allowing us to more effectively distinguish between members and non-members. Specifically, we analyze how the loss values change concerning the semantic neighbors of each input. Members tend to have loss patterns that are not only lower but also more consistent within their semantic neighborhoods. This approach enhances the attack's precision by considering both the loss and the semantic context. \\n\\n----\"}", "{\"comment\": \"Consider the weaknesses raised by other reviews and the authors' inactivity. I would like to adjust my score to 3.\"}", "{\"summary\": \"This paper introduces SMIA (Semantic Membership Inference Attack), an innovative approach to membership inference attacks against LLMs that leverages semantic analysis. The key insight is that LLMs exhibit distinct behavioral patterns when processing semantically similar variants of their training data versus unseen data. SMIA capitalizes on this by generating semantic neighbours for input texts, analyzing how the target model responds to these variations, and training a neural network to detect membership based on these response patterns. 
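That generate-embed-classify recipe can be sketched end to end in a few lines of pure Python (the weights, dimensions, and names below are illustrative stand-ins; the actual attack uses T5-generated neighbors, 1024-dimensional Cohere embeddings, and a trained classifier):

```python
import math

def neighbor_features(t_loss, t_emb, n_loss, n_emb):
    """Per-neighbor input: the loss difference followed by the
    embedding-difference direction (1 + d values in total)."""
    return [t_loss - n_loss] + [t - n for t, n in zip(t_emb, n_emb)]

def classifier(feats, w_hidden, b_hidden, w_out, b_out):
    """A one-hidden-layer network with ReLU units and a sigmoid output,
    standing in for the trained membership classifier."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, feats)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    logit = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-logit))

def smia_score(target, neighbors, params):
    """Average the per-neighbor membership scores; the final decision
    compares this average against a calibrated threshold."""
    t_loss, t_emb = target
    scores = [classifier(neighbor_features(t_loss, t_emb, n_loss, n_emb), *params)
              for n_loss, n_emb in neighbors]
    return sum(scores) / len(scores)

params = ([[1.0, 0.0, 0.0]], [0.0], [1.0], 0.0)  # toy weights: 1 hidden unit, d = 2
print(smia_score((1.0, [0.5, 0.5]),
                 [(0.5, [0.5, 0.5]), (1.5, [0.5, 0.5])], params))  # ~0.561
```

A higher average suggests the target model treats the sample and its semantic neighborhood with the consistently low loss typical of training members.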
The authors evaluate SMIA extensively on the Pythia and GPT-Neo model families using Wikipedia and MIMIR datasets, demonstrating significant improvements over existing approaches - notably achieving an AUC-ROC of 67.39% on Pythia-12B compared to the previous best of 58.90%.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors present a well-designed pipeline that combines neighbour generation using masked language models, semantic embedding analysis via the Cohere model, and neural network classification. This comprehensive approach allows SMIA to detect both exact matches and semantically similar content, representing a significant advancement over existing methods.\\nThe experimental evaluation is particularly thorough, examining performance across different model sizes, architectures, and datasets. The authors carefully consider the impact of dataset distribution by using two different non-member datasets - one from the same distribution (Wikipedia Test) and another from a different time period (Wikipedia Cutoff). This reveals important insights about how data distribution affects attack performance. Additionally, the analysis of modified text scenarios (through word additions, deletions, and duplications) provides practical insights into the method's robustness.\", \"weaknesses\": \"1. The choice of key hyperparameters, particularly the use of 25 neighbours, lacks thorough clarification. While Table 7 shows performance improvements with increasing neighbour count, there's no clear analysis of the trade-off between computational cost and performance gain. The paper should examine the diminishing returns beyond 25 neighbours and justify why this specific number optimally balances effectiveness and efficiency.\\n2. A concerning weakness emerges in the method's inconsistent performance across different types of text modifications. 
As shown in Table 3, while SMIA achieves an AUC-ROC of 62.47% for word deletions on Pythia-12B (WT dataset), it drops to 55.13% and 54.19% for word duplications and additions respectively. This significant performance gap suggests an inherent bias in the method's ability to handle different types of text alterations. The authors don't adequately explain this asymmetry or propose potential solutions.\\n3. A minor problem is that the heavy reliance on the Cohere Embedding model, a third-party service, introduces both a potential point of failure and a privacy concern - users must share their data with an external service to generate embeddings. The authors don't explore alternative embedding approaches or analyze how the choice of embedding model impacts performance. Furthermore, the paper lacks a comprehensive analysis of the cost implications of using such commercial services at scale.\\n4. The method shows notably better performance with word deletions compared to additions or duplications, as evidenced in Table 3. For instance, with Pythia-12B on the Wikipedia Test dataset, SMIA achieves an AUC-ROC of 62.47% for word deletions but only 54.19% for additions. This behavioral asymmetry suggests fundamental limitations in how SMIA handles different types of semantic modifications, yet the paper offers limited analysis of why this occurs or how it might be addressed. This becomes particularly relevant when considering real-world scenarios where adversaries might deliberately modify texts in ways that exploit these weaknesses.\\n5. While the authors demonstrate the attack's effectiveness, they provide minimal insight into how model owners might protect against such attacks. This omission is particularly notable given the paper's focus on privacy implications and its potential impact on real-world applications of LLMs. 
A more comprehensive treatment of potential countermeasures would significantly enhance the paper's practical value and provide important context for the security community.\", \"questions\": \"1. The scalability of SMIA deserves further exploration - how would the method perform with larger training datasets and more complex model architectures? The current evaluation uses 6,000 members and non-members each, which may not fully reflect real-world scenarios.\\n2. The choice of neural network architecture for the SMIA model itself appears somewhat arbitrary. Have the authors explored alternative architectures that might improve performance or efficiency? Additionally, how sensitive is the method to the quality and dimensionality of the semantic embeddings?\\n3. How might defenders mitigate against SMIA?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response, updates, and addressing all my questions.\\n\\nWhile I appreciate the authors' efforts,\\nand the updates regarding weaknesses w3, w4, and w5 do convince me,\\nthe two largest weaknesses (w1 and w2) still persist.\\nI hence cannot recommend accepting this paper and keep my score.\\n\\n**Re weakness 1** (Evaluation datasets measure distribution shift, not MI):\\nI appreciate the \\\"blind baselines\\\".\\nHowever, the flawed \\\"WC\\\" baseline is *still included in the paper* and used to \\\"sell\\\" the method.\\nThe authors mention `Evaluations on WC are included for comprehensiveness but are not the primary basis for our conclusions.`,\\nbut a reader just skimming the paper and not carefully studying the appendix will miss this.\\nWhile including the flawed WC dataset in the original version can be seen as an accidental mistake,\\nleaving it in the paper *despite knowing its flaws* leaves a bad taste in my mouth.\\n\\n**Re weakness 2** (reliance on AUROC as the 
main metric):\\nWhile I appreciate the authors' adding TPR@0.1% FPR to Table 4, most problems persist.\\nThe issue is not just omitting TPR@low FPR, it is also focusing on AUROC as the main metric. Yet, the revised paper still seems to \\\"sell\\\" the method by only reporting AUROC in the main matter, does not provide TPR@low FPR values for most results, and hides TPR@low FPR in the appendix.\\nWhile I agree that other MIA papers (especially for LLMs) also make similar mistakes, this does not make it less of a mistake.\"}", "{\"comment\": \"Thank you for your thoughtful and detailed feedback. We greatly appreciate the time you've taken to help us improve our paper. While we are unable to update the draft (deadline has passed), we will incorporate your suggestions in the next version.\\n\\nWe understand your concerns about the potential misinterpretation of the WC results. To address this, we will move all WC results to the appendix to emphasize that our primary focus is on the WT datasets. We will also add clear warnings in all relevant figure and table captions about the limitations of interpreting WC results. \\n\\nRegarding testing across different Pile splits, we agree that this is important. However, our main obstacle is that we cannot access some of the Pile subsplits due to copyright regulations. Additionally, [2] applies 52 different membership inference attacks in stage one of their methodology, which is computationally intensive and poses a challenge for replication within our resource constraints. \\n\\nThank you for acknowledging our efforts in including blind baseline comparisons. We will move the results from Section C.6 in the appendix to the main body (Section 4.2) to highlight their importance. Our rationale for using the MIMIR dataset was to establish a consistent benchmark, given the subjective selection of Pile splits in different studies. 
We chose a challenging setting with over 80% overlap in 13-grams to push the limits of our method, compared to other works that use settings with 20% overlap in 7-grams. \\n\\nWe are committed to enhancing our paper by adopting the right practices and addressing all the points you've raised. We kindly request that you consider these planned improvements when evaluating our submission.\"}", "{\"title\": \"Rebuttal to Reviewer 31Mw (part 1)\", \"comment\": \"We thank the reviewer for their feedback. We provide responses for the weaknesses and questions raised below:\\n\\n-----\\n\\n**(w1)- The choice of key hyperparameters, particularly the use of 25 neighbours ...**\\n\\nIn Section B of our paper, we provide a detailed cost estimation that encompasses generating neighbors, computing embeddings, and evaluating loss values for the target model. Our experiments, as shown in Table 7, indicate that performance improves with an increasing number of neighbors up to 25. Beyond this point, we observed diminishing returns in performance gains while computational costs continue to rise significantly. Specifically, utilizing more than 25 neighbors would necessitate larger training datasets to capitalize on the additional information, which leads to increased computational overhead without proportional benefits. Therefore, we selected 25 neighbors as it strikes an optimal balance between effectiveness and efficiency, offering substantial performance improvements without incurring excessive computational costs. \\n\\n-----\\n\\n**(w2 & w4)- A concerning weakness emerges in the method's inconsistent performance across different types of text modifications .... The method shows notably better performance with word deletions compared to additions or duplications, as evidenced in Table 3 ...**\\n\\nThis discrepancy stems from how these modifications affect the coherence and consistency of the input texts, which in turn influences the model's loss values. 
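The three edit types under discussion can be sketched as simple list operations on a whitespace-tokenized text (a toy illustration; in the paper the positions are sampled and, for additions, the inserted word comes from T5's proposals):

```python
def duplicate_word(words, i):
    """Word duplication: repeat words[i] immediately after itself."""
    return words[:i + 1] + [words[i]] + words[i + 1:]

def delete_word(words, i):
    """Word deletion: drop words[i]; this often leaves a coherent sentence."""
    return words[:i] + words[i + 1:]

def add_word(words, i, proposal):
    """Word addition: insert a proposed word before position i."""
    return words[:i] + [proposal] + words[i:]

words = "the quick brown fox".split()
print(duplicate_word(words, 1))  # ['the', 'quick', 'quick', 'brown', 'fox']
```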
In the case of word duplication, we duplicate an exact word consecutively within the text. This repetition can introduce redundancy without significantly altering the semantic meaning, which may substantially impact the model's loss values. Consequently, the loss patterns between members and non-members become less distinguishable, affecting SMIA's ability to infer membership accurately.\\n\\nFor word additions, we currently insert only the first word generated by the T5 model into the original text. This approach can lead to incomplete or out-of-context additions, disrupting the text's coherence and causing inconsistent loss values when processed by the target model. Such inconsistencies make it challenging for SMIA to detect meaningful patterns for membership inference.\\n\\nIn contrast, word deletions often result in more coherent sentences, as the removal of a word may not drastically alter the overall meaning of the text. This maintains a consistent loss landscape, allowing SMIA to more effectively differentiate between members and non-members based on their loss and semantic relationships.\\n\\n-----\\n\\n**(w3)- A minor problem is the heavy reliance on the Cohere Embedding model, a third-party service, introduces both a potential point of failure and a privacy concern ...**\\n\\nIn response, we have added Section C.7 to our paper, where we present additional ablation experiments demonstrating the robustness of SMIA under different embedding models and classifier network sizes. Specifically, Table 9 includes results using the E5-mistral-7b-instruct embedding model alongside the original Cohere v3 model, showing consistent performance across different embeddings. This indicates that SMIA does not rely heavily on any specific third-party service, thereby mitigating potential points of failure and privacy concerns associated with data sharing. 
Furthermore, we have provided a comprehensive cost analysis in Section B, detailing the computational expenses of generating neighbors, computing embeddings, and evaluating loss values for the target model. This addresses the cost implications of using such services at scale and demonstrates the practicality of our approach. \\n\\n----\\n\\n**(w5 & q3)- How might defenders mitigate against SMIA?**\\n\\nWhile our paper primarily focuses on demonstrating the effectiveness of SMIA to highlight existing vulnerabilities in large language models, we acknowledge the importance of discussing potential defenses against such attacks; however, an in-depth exploration of countermeasures such as Differential Privacy (DP)\\u2014which is the main solution for defending against membership inference attacks\\u2014is beyond the scope of our current work; we consider this an important direction for future research and defer a comprehensive treatment of potential defenses to future studies. \\n\\n------\"}", "{\"summary\": \"The paper introduces the Semantic Membership Inference Attack (SMIA), a novel approach that conducts Membership Inference Attacks by leveraging the semantic content of inputs and their perturbations. SMIA trains a model to detect variations in model behavior between members and non-members by analyzing how output probability distributions change with input perturbations. Evaluated on two models and two datasets, SMIA outperforms existing MIA techniques, achieving higher AUC and TPR when FPR is low in detecting membership.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method in this paper shows very good novelty.\\n\\n2. The proposed method has a good performance considering both AUC and TPR when FPR is low.\\n\\n3. The proposed method successfully identifies membership even when data undergoes slight modifications due to its design motivation.\", \"weaknesses\": \"1. 
Though the authors claim the proposed method is designed for grey-box models, there are no direct experiments on real grey-box models.\\n\\n2. The ablation experiments are not very comprehensive. For example, the authors might provide results when changing the embedding model or the classification model. \\n\\n3. There is no intuitive explanation of the proposed method, such as the difference between the loss trends of members and those of non-members.\", \"questions\": \"1. Could the authors provide some examples of grey-box models?\\n\\n2. How will the performance of SMIA change if changing the embedding model or classification models?\\n\\n3. What is the general behavior of loss from members? For example, will the loss from members change less than non-members considering neighbours?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Note\", \"comment\": \"We sincerely appreciate the time and effort you invested in reading our paper and providing feedback. We would like to inform you that we have updated our draft in response to the valuable feedback provided during the review process. The new additions are highlighted in blue in the revised draft.\", \"the_main_changes_are\": [\"**1. Section C.6 \\u2013 Evaluating Dataset Complexity of WT and WC against MIAs**: We have implemented two blind attacks from Das et al. (2024) to assess the complexity of the WT and WC datasets in the context of membership inference attacks. This evaluation demonstrates that the WT dataset presents a challenging and appropriate setting for our main evaluations of SMIA, while highlighting that the WC dataset is susceptible to distributional shifts that can be exploited by blind attacks. These findings reinforce the robustness of our approach and validate our choice of the WT dataset for primary evaluations.\", \"**2. 
Section C.7 \\u2013 Ablation Study on SMIA Performance with Different Embedding Models and Classifier Networks**: We have conducted additional ablation experiments to examine the robustness of SMIA under variations in embedding models and classifier network architectures. Specifically, we present results using the E5-mistral-7b-instruct embedding model alongside the original Cohere v3 model, showing consistent performance across different embeddings. Additionally, we explore the impact of varying the size of the classifier network by comparing the original architecture with both smaller and larger networks. These experiments underscore SMIA's adaptability and effectiveness across different configurations.\", \"We believe these additions address most of the concerns raised during the review process and enhance the clarity and comprehensiveness of our work. We appreciate your time and consideration.\"]}", "{\"summary\": \"This paper proposes a new membership inference attack (MIA) for LLMs, called SMIA, with the goal to measure memorization. SMIA's intuition is that semantically similar \\u201cneighbors\\u201d of a sample contain membership signal. In the threat model, the adversary can query the target LLM to obtain losses/log probs, and has access to a set of known training members and non-members.\\n\\nConcretely, given a target sample, the attack works as follows:\\n\\n1. Generate \\\"neighbors\\\" of the target via repeated random masking and infilling (via T5 3B).\\n2. Calculate semantic embeddings of the target sample and all neighbors (via Cohere Embedding v3).\\n3. For every neighbor, calculate the difference to the target sample in terms of i) loss and ii) semantic embedding (full direction). Use the resulting 1 + 1024 values as the input to a moderately-sized MLP classifier that predicts a membership score.\\n4. 
Return \\\"member\\\" if the average membership score over all neighbors is larger than some threshold.\\n\\nThe paper evaluates SMIA for Pythia and GPT-NEO models on Wikipedia and MIMIR data. All results are reported in terms of AU-ROC; a few results also in terms of TPR @ FPR. For Wikipedia, the evaluation dataset uses Wikipedia text in the Pile training split as members, and one of two types of non-members: *WT* uses Wikipedia text from the Pile test split (before March 2020); *WC* uses Wikipedia pages published after August 2023 (not in the Pile training split). For MIMIR, the evaluation uses the highest n-gram overlap subsplit with various splits.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces SMIA and its intuition very thoroughly. Even though there is no code, the attack's description should be sufficient to reimplement SMIA and reproduce most reported results. I also find it very interesting that directions in embedding space seem to carry non-trivial membership signal and think that the idea of considering not only exact verbatim samples for MI in LLMs is important. Lastly, although MI for LLMs remains a hard task, this paper manages to achieve small but consistent improvements over most existing methods in some settings.\", \"weaknesses\": \"**Evaluation datasets measure distribution shift, not MI**: Some evaluation datasets in this paper might be deeply flawed, in that they do not measure MI performance but just distribution shifts. For Wikipedia data, SMIA only yields a big improvement on the WC split, which uses temporally shifted Wikipedia articles as non-members. 
However, [Das et al., 2024](https://arxiv.org/abs/2406.16201) show that simple \\\"blind\\\" baselines that classify membership just based on the data (without any signal from the target model) already achieve very high MIA success.\\nHence, SMIA could potentially just be a good distribution shift detector and the evaluation misleading. This is particularly concerning since the membership classifier is a NN, which might be particularly prone to overfitting on spurious correlations in the evaluation data. One way to alleviate this concern could be a blind baseline that uses inputs independent of the target model (e.g., random or constant loss).\\nFor MIMIR, this is less of a concern. However, there SMIA only achieves significantly better MIA success for GitHub data and Wikipedia data for one model; existing attacks perform similarly or better on all other subsplits (in terms of TPR @ 2% FPR).\\n\\n**AU-ROC alone is insufficient to evaluate MIAs**: It has long been known that only reporting AU-ROC in the evaluation of MIAs is deeply flawed and misleading [(Carlini et al., 2021)](https://arxiv.org/abs/2112.03570). Yet, this paper uses only AU-ROC as its main metric, and only reports TPR@low FPR values for a very limited subset of results in App. C.1. In particular, the effects of deduplication (Table 6), varying number of neighbors (Table 7), and slight modifications (Table 3) are only reported in terms of AU-ROC, and can hence not be judged soundly. The paper should report TPR @ low FPR (e.g., 2% or 1%) together with full ROC curves as the main metric, and defer AU-ROC metrics to the appendix (or omit them entirely).\", \"nb\": \"Consequently, while I looked at all tables in the main matter, I can only judge SMIA's performance by the values on Tables 4 and 5 (App. C.1).\\n\\n**Complex approach without ablation**: SMIA is a complex approach that relies on many moving parts; yet, there is very little investigation about why every part is necessary. 
Hence, this paper requires a more thorough ablation study (e.g., what if the classifier only acts on losses/embedding directions, or what if the NN is replaced by a linear classifier?).\\n\\n**Other minor points/feedback**:\\n1. SMIA requires a known member and non-member subset from the training data distribution. This is a non-trivial assumption that requires some discussion.\\n2. The lowest reported FPR threshold is 2%. This is still relatively high; it would be interesting to also report the attack's performance at lower FPRs such as 1% and 0.1% if feasible.\\n3. I found two things in Figure 2 slightly confusing: i) the axis label, and ii) that all neighbors are larger on one axis; I understood that the similarity is somewhat independent of the parameter space? Also, e.g., $x_1^m$ seems to be further from $x^m$ than $x^n$. Right now, I think the figure does not carry much information and could be confusing, hence it might be dropped. Finally, there is a typo in the caption (`taregt`).\\n4. Sec. 5.1 mostly repeats the results displayed in Table 1. I think the space could be used better by dropping this repetition and instead show some of the interesting studies currently in App. C.\", \"questions\": \"1. L218: Does the replacement actually happen in terms of words, or should it be tokens?\\n2. For the neighbor generation, are the resulting neighbors all unique, or could it be that some neighbours are duplicates/equal to the original sample?\\n3. Were T5 or the Cohere Embedding V3 model trained on some of the evaluation datasets? In particular, could it be that one of those models was only trained on the members for WC but not the non-members?\\n4. L382--384: Why are the MIMIR splits constrained to only 1000 samples, especially before splitting?\\n5. L391: The intuition about \\\"Why SMIA Outperforms Other MIAs\\\" mentions that using a neural network is one of the key success factors. 
However, I think simply using a neural network does not imply a stronger attack; could the authors elaborate this a bit more?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer zLuE (part 3)\", \"comment\": \"**(q5)- The intuition about \\\"Why SMIA Outperforms Other MIAs\\\" mentions that using a neural network is one of the key success factors. However, I think simply using a neural network does not imply a stronger attack; could the authors elaborate this a bit more?**\\n\\nWe agree that simply using a neural network does not inherently result in a stronger attack. In the context of SMIA, the key to its high performance lies in leveraging semantic embeddings of the input texts\\u2014making it the first MIA against LLMs to utilize input semantics. The neural network serves as an effective tool for integrating these high-dimensional embeddings with the model's loss values into a unified feature space. This allows the classifier to capture complex patterns and subtle differences between members and non-members that simpler models might miss. As demonstrated by Nasr et al. (2019), neural networks can extract informative features in membership inference attacks. Therefore, it is the combination of utilizing semantic embeddings and the neural network's capacity to model intricate relationships in the data that contributes to SMIA's superior performance, rather than the use of a neural network alone.\\n\\n----\"}", "{\"title\": \"Rebuttal to Reviewer zLuE (part 1)\", \"comment\": \"We thank the reviewer for their feedback. We provide responses for the weaknesses and questions raised below:\\n\\n---\\n\\n**(w1)-Evaluation datasets measure distribution shift, not MI** \\n\\nTo address the potential issue of our evaluation datasets measuring distribution shifts rather than true membership inference performance, we have added Section C.6 to our paper.
In this section, we implement two blind attacks from Das et al. (2024): (a) greedy rare word selection and (b) bag-of-words classification, which operate without any signal from the target model. Our results, summarized in Table 13, show that these blind attacks achieve high AUC-ROC scores on the WC split (58.30% and 83.6%, respectively), indicating that WC is susceptible to distribution shift exploitation.\\n\\nAs we mention in Section 5.1, WC contains distribution-shifted data, and we discuss why different MIAs achieve good results on this dataset due to the distinct distributions of members and non-members. Our main evaluations focus on the WT dataset, which provides a realistic and challenging scenario where members and non-members come from closely aligned distributions. Evaluations on WC are included for comprehensiveness but are not the primary basis for our conclusions.\\n\\nOn the WT split, the same blind attacks perform no better than random guessing, with AUC-ROC scores around 52% and TPRs at low FPRs near zero. This stark contrast emphasizes the limitations of datasets like WC for evaluating MIAs, as their success is largely driven by distributional differences rather than true privacy leakage. Importantly, our proposed SMIA method maintains strong performance on the WT dataset (as shown in Tables 7 and 8), effectively detecting membership where blind attacks fail. This indicates that SMIA is not merely capturing distribution shifts but is robustly inferring membership information based on the model's learned representations, thereby addressing true privacy leakage concerns.\\n\\n-----\\n\\n**(w2)- AU-ROC alone is insufficient to evaluate MIAs**\\n\\nWe agree that relying solely on AU-ROC can be insufficient and potentially misleading for evaluating membership inference attacks. 
In our paper, we have provided TPR at low FPR values for our main results in Tables 4 and 5, and have now added TPR@1% FPR to Table 4 for a more comprehensive evaluation. While previous works like Min-K and Min-K++ did not report TPR for FPR lower than 5%, possibly due to the low absolute values, we recognize the value of including these metrics to facilitate a thorough comparison. From Table 4, it is evident that SMIA achieves the best results in most scenarios, especially at low FPR thresholds, demonstrating its effectiveness over existing methods. We will update other tables to include TPR@1% FPR values in the main paper to enhance transparency.\\n\\n----\\n\\n**(w3)- Complex approach without ablation**\\n\\nWe have added Section C.7 to present additional ablation experiments that demonstrate the robustness of SMIA under variations in both embedding models and classifier network sizes. Specifically, Table 10 includes results using the E5-mistral-7b-instruct embedding model alongside the original Cohere v3 model, showing consistent performance across different embedding models. Additionally, Table 11 explores the impact of varying classifier network sizes by comparing the original network with a smaller linear classifier and a larger network with additional fully connected layers. These results collectively highlight SMIA\\u2019s adaptability to different model configurations.\\n\\n-----\\n\\n**(w4)- SMIA requires a known member and non-member subset from the training data distribution. This is a non-trivial assumption that requires some discussion.**\\n\\nWhile SMIA does require access to known member and non-member subsets from the training data distribution, we believe this assumption is realistic in several practical scenarios. In privacy auditing or unlearning verification, for instance, auditors or red team members often have access to portions of both member and non-member data to evaluate a model's compliance with privacy standards.
Even in general attack settings, adversaries can leverage knowledge about the model's training cutoff date to infer likely members and non-members\\u2014for example, assuming that data published before the cutoff (e.g., Wikipedia articles, arXiv papers, GitHub repositories) are included in the training set, while newer data are not. We discuss this in Section 3.2, illustrating how an adversary might collect such data for SMIA. Therefore, while the requirement is non-trivial, it reflects practical conditions under which membership inference attacks are relevant and actionable.\\n\\n----\"}", "{\"title\": \"Rebuttal to Reviewer 8LnL (part 2)\", \"comment\": \"**(w4)- The idea of learning an auxiliary classifier on top of the membership signal from neighbours seems to have been explored in [2]**\\n\\nThe cited paper [2] employs simple attacks such as n-gram statistical models and bag-of-words classifiers for membership inference. These methods rely on basic textual features and do not capture the deeper semantic relationships or the behavior of the target model. In above experiment (for showing that WT is a strong dataset), we demonstrate that these attacks are ineffective on our datasets, yielding results close to random guessing (AUC-ROC scores close to 50%). In contrast, our SMIA model leverages the semantic content of the input data and the behavior of the target model as input features for the auxiliary classifier. This allows us to capture more nuanced patterns that are indicative of membership, leading to a more effective attack. \\n\\nThe work most closely related to ours is actually [AA]. In Section 2 of our paper, we discuss the differences between our approach and that of [AA]. Specifically, while [AA] also trains an auxiliary classifier for membership inference, their work focuses on image classification models and operates in a white-box setting with access to the model's gradients. 
Our method is designed for LLMs and operates in a gray-box setting, relying solely on input-output behavior without access to model internals.\\n\\n[AA] : Milad Nasr, Reza Shokri, and Amir Houmansadr. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In 2019 IEEE symposium on security and privacy (SP)\\n\\n-----\\n\\n**(q1)- How does the member data of the embedding model impact the performance of the SMIA**\\n\\n In our experiments, we utilized the Cohere v3 model to generate embeddings for the input samples. The training data for Cohere v3 is not publicly disclosed, so we do not have specific information about whether it includes our member data. As a result, we cannot definitively assess how the presence or absence of member data in the embedding model's training set influences the performance of the SMIA.\\n\\nHowever, we operate under the assumption that large pre-trained models like Cohere v3 provide robust and generalizable embeddings for a wide range of textual inputs, regardless of whether they have encountered those exact samples during training. These models are designed to capture semantic relationships and produce meaningful representations even for novel inputs.\\n\\nOur rationale is that the embeddings generated by Cohere v3 effectively capture the semantic content and nuances of the target samples, which is crucial for the SMIA to function correctly. Since the embedding model aims to generalize well across different data, we expect it to produce consistent embeddings for both member and non-member samples.\\n\\n----\"}", "{\"title\": \"Rebuttal to Reviewer 8LnL (part 1)\", \"comment\": \"We thank the reviewer for their feedback. 
We provide responses for the weaknesses and questions raised below:\\n\\n-----\\n\\n**(w1)- There are various works at this point in literature which have argued that \\\"WC\\\" based assessment of MIAs when data is split across a cut-off date, is not sound. This work bases many of their gains on that setting.** \\n\\nWe agree that evaluating membership inference attacks using datasets split across a cut-off date (e.g., 'WC' dataset) can lead to unsound assessments due to temporal distribution shifts. To address this concern, our main experiments, as detailed in Sections 4.2.1 and 5.1, are conducted using the 'WT' dataset. In 'WT', both members and non-members are drawn from the same distribution: members come from the training split of the Pile dataset, and non-members are from the test split of Pile. This ensures that the data distributions for members and non-members are consistent, providing a sound basis for evaluating MIAs. \\n\\n We included the 'WC' dataset to demonstrate how temporal shifts can lead to misleading conclusions about privacy leakage. As reported in Section 5.1, we observed significant differences in MIA performance between the 'WT' and 'WC' datasets. For instance, the SMIA method achieves an AUC-ROC of 67.39% on 'WT' but increases to 93.35% on 'WC' for the Pythia-12B model. Similarly, the true positive rates at low false positive rates improve markedly on 'WC' compared to 'WT' (e.g., TPR of 3.8% vs. 46.2% at 2% FPR) which is consistent with findings from cited papers [1,3].\\n\\n---\\n\\n**(w2)- The method should be tested across all 20 train-test splits of Pile. Paper [2] on LLM Dataset Inference particularly shows how most MIAs perform well only on a few datasets.... Given that the goal of MI in LLMs is hard, I would love to see experiments on dataset inference.**\\n\\nWe understand the importance of evaluating our method across diverse datasets. 
However, we'd like to clarify a key distinction: The paper you mentioned introduces a dataset inference attack aimed at detecting whether an entire dataset was used in a model's training. In contrast, our research focuses on membership inference attacks targeting individual training samples of much smaller size (between 130 to 150 words). These are fundamentally different problems: dataset inference operates at the dataset level, while our MIA operates at the individual sample level. Because of this difference, the authors in [2] evaluated their approach on all train-test splits of the Pile to suit their specific problem.\\n\\nTesting our method across all train-test splits of the Pile would be computationally infeasible due to the sheer scale of the data. Additionally, some splits contain copyrighted material that we cannot use due to legal and ethical considerations. To the best of our knowledge, no prior work on MIAs has considered all splits of the Pile for individual sample inference, largely because of these constraints.\\n\\n Most existing works in this area utilize the MIMIR dataset, which we also employ in our study. For our experiments, we selected four significant and commonly used subsets of the Pile: Wikipedia (en), GitHub, PubMed Central, and ArXiv. Notably, Figure 3 in the cited paper [2] shows that these four datasets are the hardest to analyze, with AUC-ROCs close to 50%, indicating performance near random chance.\\n\\n----\\n\\n**(w3)- The blind baseline of n-gram is not considered in this work (paper [3]). This is an important comparison to understand if the gains in this work are meaningful.**\\n\\nTo address this, we have added Section C.6 to our paper, where we implement two blind attacks from Das et al. (2024): (a) greedy rare word selection (n-gram approach) and (b) bag-of-words classification. 
These attacks operate without any signal from the target model, and serve to evaluate whether our gains are meaningful beyond exploiting distribution shifts. Our results, summarized in Table 8, show that these blind attacks achieve high AUC-ROC scores on the WC split (58.30% and 83.6%, respectively), indicating that WC is susceptible to exploitation due to distributional differences between members and non-members. As discussed in Section 5.1, WC contains distribution-shifted data, and we acknowledge that this can inflate the performance of MIAs that exploit such shifts. Our main evaluations focus on the WT dataset, where members and non-members are from closely aligned distributions. On the WT split, the same blind attacks perform no better than random guessing, with AUC-ROC scores around 52% and TPRs at low FPRs near zero. In contrast, our proposed SMIA method maintains strong performance on WT (as shown in Tables 7 and 8), effectively detecting membership where blind attacks fail. This demonstrates that SMIA is not merely capturing distribution shifts but is robustly inferring membership information based on the model's learned representations, thereby addressing true privacy leakage concerns.\"}", "{\"title\": \"Rebuttal to Reviewer 31Mw (part 2)\", \"comment\": \"**(q1)- The scalability of SMIA deserves further exploration - how would the method perform with larger training datasets and more complex model architectures? The current evaluation uses 6,000 members and non-members each, which may not fully reflect real-world scenarios.**\\n\\n\\nIn Figure 3, we investigated the impact of varying training dataset sizes on the performance of SMIA by experimenting with 6,000/6,000, 4,000/4,000, 2,000/2,000, and 1,000/1,000 members and non-members. The results demonstrate that increasing the size of the training data enhances the effectiveness of SMIA, indicating its potential scalability. 
However, we acknowledge that scaling to much larger datasets and more complex models would entail significantly higher computational costs, particularly for neighbor generation and embedding calculations. Addressing these computational challenges and extending our evaluation to larger, real-world scenarios is an important direction for future work.\\n\\n----\\n\\n**(q2)- The choice of neural network architecture for the SMIA model itself appears somewhat arbitrary. Have the authors explored alternative architectures that might improve performance or efficiency? Additionally, how sensitive is the method to the quality and dimensionality of the semantic embeddings?**\\n\\nWe based our architecture on the work of Nasr et al. (2019), which has proven effective in white-box MIAs against deep neural networks in vision tasks. To explore alternative architectures and assess sensitivity to embedding quality and dimensionality, we have added Section C.7 with additional ablation experiments. Specifically, Table 9 presents results using the E5-mistral-7b-instruct embedding model (with 4096 dimensions) alongside the original Cohere v3 model (with 1024 dimensions), demonstrating consistent performance across different embeddings. Additionally, Table 10 investigates the impact of varying classifier network sizes by comparing our original network with a smaller linear classifier and a larger network with extra fully connected layers. These findings collectively highlight SMIA's robustness and adaptability to different model configurations, suggesting that its performance is not overly sensitive to the choice of neural network architecture or embedding model used.\"}" ] }
EwRxk3Ho1V
Beyond Cosine Similarity: Introducing the Unified semantic Similarity Metric Benchmark (USMB) for Text Similarity Measurement
[ "Samarth Goel", "Reagan Lee", "Kannan Ramchandran" ]
Text embedding models are increasingly utilized in production across various applications, from Information Retrieval (IR) to document parsing, but relatively little research has been focused on how to best utilize these embeddings for downstream tasks. While cosine similarity, a popular measure of embedding and text similarity, is widely used, it may not be the strongest metric choice for all tasks. In this work, we introduce the Unified semantic Similarity Metric Benchmark (USMB), a novel leaderboard for text similarity metrics composed of 5 unique tasks and 30+ datasets with the goal of providing a standardized means of measuring the effectiveness of a text similarity metric on a suite of challenging tasks encompassing the nuances of semantic understanding. Additionally, we demonstrate that while cosine similarity achieves the highest score on our benchmark of any pre-existing metric, developing a task-specific ensembled model using our metrics leads to a 40.3\% increase in benchmark performance relative to cosine similarity. We hope that through this work, greater attention can be given to potential performance gains through metric selection and that the field's ability to measure semantic similarity advances as a result.
[ "Deep Learning or Neural Networks", "Similarity and Distance Learning", "(Application) Natural Language and Text Processing", "(Cognitive/Neuroscience) Language" ]
Reject
https://openreview.net/pdf?id=EwRxk3Ho1V
https://openreview.net/forum?id=EwRxk3Ho1V
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywkQp8ocVM", "kn358XO9Uk", "dIC9I27p5f", "XweZunlhK5", "WJML7xUPKF", "D32K6nNI2Z", "60UITohC5o" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_comment", "meta_review", "official_review" ], "note_created": [ 1737524210969, 1730715585213, 1730622775182, 1730709508244, 1732162983508, 1734606083426, 1730501082490 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12725/Reviewer_Knn5" ], [ "ICLR.cc/2025/Conference/Submission12725/Reviewer_tCxV" ], [ "ICLR.cc/2025/Conference/Submission12725/Reviewer_npqy" ], [ "ICLR.cc/2025/Conference/Submission12725/Area_Chair_6JXR" ], [ "ICLR.cc/2025/Conference/Submission12725/Area_Chair_6JXR" ], [ "ICLR.cc/2025/Conference/Submission12725/Reviewer_zikP" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": [\"The authors focus on the widespread use of cosine similarity between text embeddings in various NLP-related tasks and attempt to validate the appropriateness of this approach across diverse datasets and task settings.\", \"Validation on existing datasets and tasks, as well as on new datasets with added perturbations, showed that ensembling simple metrics like cosine similarity and BM25 yields better performance than using cosine similarity alone.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper tackles an appealing problem. Cosine similarity is widely used, yet the reasons for its effectiveness, especially from a theoretical perspective, remain unclear. New findings on this problem could provide valuable information for many NLP/ML practitioners. Generally, papers that carefully question de facto approaches tend to be of interest to both researchers and practitioners, and this paper follows this line.\", \"The experimental setup of adding perturbations to existing datasets is interesting. 
In particular, the figures, such as Fig. 1 and 2, are novel to me, and they are likely to offer valuable insights to many researchers in textual similarity.\"], \"weaknesses\": [\"The contributions are all marginal, making it difficult to say that the work meets the quality expected for acceptance at a top conference in representation learning.\", \"The empirical takeaway can be summed up as \\u201censembling improves scores,\\u201d which is somewhat trivial.\", \"The authors claim to introduce a benchmark, but in reality, they simply use several existing datasets in their experiments. The proposal to add perturbations is appealing, and the semi-automated generation of parts of the summarization dataset is also a contribution. However, describing this as a benchmark proposal seems somewhat of an overclaim.\", \"There is no theoretical contribution. If an ICLR reader were to read the first paragraph of this paper, they would likely expect some theoretical implications regarding the strengths or weaknesses of cosine similarity. Building on existing work, such as Zhelezniak et al. NAACL 2019 (arXiv: 1905.07790), to offer new insights would enhance the appeal of this paper.\"], \"questions\": [\"It seems that the validity of using cosine similarity for test-time evaluation might depend on which metric\\u2014cosine, inner product, or L2\\u2014is used during the training of text embedding models, particularly in contrastive learning. In \\u00a73.1, many models are mentioned, but are there any implications in the experimental results regarding the training-test discrepancy?\", \"The proposal to add perturbations is interesting, but the assumption that semantic similarity decreases linearly with the degree of perturbation seems somewhat simplistic. Is this linearity actually observed across a large number of specific texts? 
If not, are there any solutions to address this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a novel benchmark for semantic text similarity. The benchmark consists of 5 semantic tasks with 30 datasets for the evaluation of similarity approaches. The main contributions are: i) semantic similarity benchmark, and ii) comprehensive evaluation of similarity approaches. The benchmark shows that ensemble methods outperform standard similarity metrics (e.g. cosine similarity).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Benchmark for semantic text similarity with data and different baselines.\", \"Clear description of background knowledge and related work needed to understand the proposed benchmark and approaches.\", \"The authors perform a comprehensive comparison of metrics and neural models with the proposed benchmark.\"], \"weaknesses\": [\"Dependence on monolingual (e.g. English) datasets.\"], \"questions\": [\"Please address the following questions during the rebuttal:\", \"Please define the language or languages used by the benchmark.\", \"Elaborate on possibilities to extend the benchmark into a multilingual setting.\", \"Please speculate if an evaluation under OOD data could be managed by the robustness section of the benchmark.
For example, domain shift in the data.\"], \"extra\": [\"In line 289 there is an error with the anonymous reference.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have no concerns.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores ensembling different similarity measures and evaluates their performance across datasets designed to assess five aspects of semantic similarity: alignment with human judgment, robustness to text transformations, sensitivity to unrelated information, clustering effectiveness, and retrieval robustness. Focusing on text similarity, the authors employ multiple models to generate text embeddings throughout their experiments. By fitting task-specific ensembles for each category, the authors achieve higher scores compared to the individual similarity metrics. To standardize this approach, the authors propose a benchmark for semantic similarity metrics, outlining certain similarity aspects and relevant datasets for each category.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Combining different similarity metrics is promising, mainly as it seems to offer better alignment with human perceptions of similarity. While this isn\\u2019t explicitly addressed in the paper, it\\u2019s reasonable to hypothesize that each metric captures unique aspects of similarity, allowing for meaningful combinations.\\n\\nThe experiments are broad, covering multiple datasets and embedding models. They offer a valuable overview of the strengths and limitations of individual metrics, highlighting issues such as cosine similarity\\u2019s poor robustness.\", \"weaknesses\": \"The related work is superficial, mainly listing standard similarity metrics without providing much context. 
Moreover, the discussion omits relevant metrics geared toward similarity in neural network representations, such as CKA (Kornblith et al., 2019) and generalized shape metrics (Williams et al., 2021). These metrics should be included and compared in the experimental setup, as they represent significant approaches to semantic similarity, particularly relevant in the context of LLMs. Overall, the experiments lack stronger baselines.\\n\\nThe evaluation setup lacks proper grounding, as it involves comparing five standard similarity metrics against an ensemble specifically fitted to each dataset category. A key issue here is that the ensemble is directly trained on each dataset, meaning it has been optimized to perform well within the specific context of the data it\\u2019s evaluated on. This setup naturally leads to higher scores for the ensemble, as it is tuned to the particular nuances of the dataset, while the individual similarity metrics \\u2014 such as cosine similarity, which is simply calculated as the cosine of the angle between two text embeddings \\u2014 remain static and unoptimized for the dataset in question. To assess the practical usefulness of this ensembled metric, it should ideally be evaluated on entirely different datasets than those used for training. This approach would better reflect a real-world scenario, where in practice, one does not typically have access to labeled texts for fine-tuning a model to every new dataset. Evaluating it only on the datasets it has been fitted to limits the validity of any claims about its general performance or superiority over other metrics. Additionally, some critical aspects are overlooked, such as a comparison of the runtime and computational demands of the ensemble relative to individual metrics. \\n\\nWhile most of the chosen aspects of similarity are reasonable, the approach to sensitivity raises some concerns. 
The authors state, for example, that \\\"adding a needle that is 100% the length of the original text should decrease similarity by 50%, and removing 25% of the text should reduce similarity by 25%.\\\" This implies a linear relationship between similarity and text length, which may not be a desirable property for a similarity metric. For instance, summaries are often much shorter than the original texts yet should still yield high similarity scores if they effectively capture the main content. In such cases, the similarity score should depend more on the content and relevance of the \\u201cneedle\\u201d rather than simply on its length.\\n\\nMoreover, this approach does not distinguish between scenarios where the added or removed text may be semantically unrelated, redundant, or adversarial, all of which could influence similarity differently. Without further clarification, this linear assumption risks oversimplifying the complexity of semantic similarity, as it neglects the roles of content and context in determining meaningful similarity scores.\\n\\nThe wording is sometimes imprecise and difficult to parse. For example, lines 105\\u2013107 could mislead readers into thinking that RAG is the only application of information retrieval in LLMs. Similarly, lines 178\\u2013180 are confusing due to unclear phrasing, making it difficult for readers to grasp the intended meaning.\\n\\nThe proposed benchmark feels like an afterthought, only mentioned toward the end. Additionally, the title seems misaligned with the paper's primary focus, which centers more on the empirical evaluation of standard similarity metrics compared to their ensembled versions. Overall, the paper lacks substance, as it offers no significant contribution. The practical utility of the ensembled metric is unclear in this evaluation setup. 
While the benchmark and the selected aspects of similarity are reasonable and appear relevant, they lack theoretical grounding and clear motivation.\", \"questions\": [\"How are categorical decisions determined when using similarity metrics that output a continuous score between 0 and 1?\", \"What exactly is the content of the \\\"needle\\\"? Are there multiple types, as suggested by the phrase, \\\"...we add an irrelevant or adversarial piece of text ('needle')...\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder: Please respond and update the score if necessary\", \"comment\": \"Dear Reviewers,\\n\\nKindly ensure that you respond proactively to the authors' replies (once they are available) so we can foster a productive discussion. If necessary, please update your score accordingly. We greatly appreciate the time and effort you\\u2019ve dedicated to the review process, and your contributions are key to making this process run smoothly.\\n\\nThank you,\\n\\nAC\"}", "{\"metareview\": \"The paper investigates the effectiveness of cosine similarity in NLP tasks and examines the benefits of combining different similarity measures. Through their research, the authors demonstrate that ensembling simple metrics like cosine similarity and BM25 often outperforms using cosine similarity alone. They introduce the Unified Semantic Similarity Metric Benchmark (USMB) to evaluate text similarity metrics across five areas: human preference alignment, transformation robustness, information sensitivity, clustering performance, and retrieval robustness. The study finds that while cosine similarity performs well overall, no single metric is best for all tasks. 
By developing task-specific ensemble models, they achieve superior performance compared to individual metrics.\\n\\nThe reviewers commend the paper for its significant contributions, including the creation of a new evaluation benchmark for text similarity metrics and the finding that combining multiple metrics greatly improves performance. The paper serves as a strong reminder that integrating various similarity measures often outperforms using cosine similarity alone. However, the paper lacks theoretical insights and offers only a shallow analysis of cosine similarity. Additionally, the novelty is limited, as it is already well-known that combining string-based and embedding-based similarity metrics enhances performance, a concept extensively studied in the retrieval literature.\\n\\nI recommend rejecting the paper because it lacks theoretical insights and offers limited novelty, as noted by the reviewers. Although the results are intriguing, a more robust analysis is necessary to fully understand the findings presented, including the effectiveness of the proposed similarity measures.\", \"additional_comments_on_reviewer_discussion\": \"No progress has been made, as neither the reviewers nor the authors have participated in any follow-up discussions. I've urged both parties to initiate discussions during the author response period.\\n\\nReviewer zikP conducted a comprehensive review, pointing out critical issues with the experimental setup, which is a major weakness of this work. Additionally, Reviewer Knn5 emphasized the lack of a theoretical foundation, which I believe is essential for an ICLR conference paper to sufficiently support its experimental results.\"}", "{\"summary\": \"This paper introduces the Unified Semantic Similarity Metric Benchmark (USMB), a benchmark for evaluating text similarity metrics across five tasks: human preference alignment, transformation robustness, information sensitivity, clustering performance, and retrieval robustness. 
The authors evaluate several pre-existing text similarity metrics like cosine similarity, Levenshtein distance, Rouge score, BM25, and Jaccard similarity on these tasks. They find that while cosine similarity achieves the highest overall score among the pre-existing metrics, no single metric dominates across all tasks. To improve performance, the authors develop task-specific ensemble models that combine these metrics. These ensemble models outperform all the pre-existing metrics.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The key contributions of this work are: (1) introducing an evaluation benchmark for text similarity metrics, (2) showing that ensembling multiple metrics can significantly boost performance on this benchmark. The paper does provide a good reminder that combining multiple types of similarity is likely to work better than cosine alone.\", \"weaknesses\": \"Novelty: The novelty is very limited, and the paper makes novelty claims that are not accurate: it is known that combining string-based similarity metrics with embedding-based ones is likely to give better results, and that is even more likely when there is training involved; this was extensively studied, for example, in the retrieval literature, where it is common to combine metrics based on sparse representations (e.g., BM25) with embedding-based similarity.\", \"methodology\": [\"The paper looks at average scores across different models and draws conclusions regarding similarity metrics. However, the performance variation across different models is very relevant: for example, is it the case that some models benefit more than others from metric aggregation? Perhaps strong embedding models benefit much less: this is an important question given that both embedding algorithms and similarity metrics try to solve the same problem. To pick the example of robustness: robustness can be addressed at the embedding level as well as at the similarity metric level. 
For this reason, the interaction with the embedding model chosen can\\u2019t be ignored.\", \"The benchmark is set up such that you train a metric on the train split of the benchmark and test it on the test portion. I would not trust this setup to make decisions about my metric because it may not generalize to new data or tasks. On the other hand, if the downstream task I am targeting is clear to me and I do not need generalization, then I could use a task- or domain-specific setup to train a similarity metric.\", \"The three assumptions in lines 250 to 256 seem pretty ad hoc, especially the third one. An alternative is to choose a set of downstream tasks and simply test task performance under different perturbations.\"], \"limitations_of_the_benchmark\": \"The benchmark is limited in several dimensions; for example, the task of alignment with human preferences is limited to comparisons between machine-generated text and human text. Also, it does not sound like the humans have access to the reference text, so they are basically solving a different task. Given the name of the task, I expected this to compare automatic similarity scores with human-assigned similarity scores. For the robustness tasks, the assumptions made in evaluating this metric are again limiting, and the data is restricted to summaries alone.\", \"questions\": [\"There are several claims made throughout the paper that are simply not accurate. I would recommend the authors reconsider such claims or back them up in revisions:\", \"lines 41-45: there is a sore lack of exploration into measuring semantic similarity. 
Similarly, in the abstract: \\u201crelatively little research in how to best utilize embeddings for downstream tasks\\u201d or lines 511-516: \\u201cthere are no other works that have systematically researched combining multiple text similarity metrics\\u201d: please see the extensive work done on this topic in machine translation evaluation or retrieval, for example.\", \"Presentation in Section 2 is not clear: I recommend formalizing the problem, so that the metrics can be described accurately: for example, TF-IDF introduces documents, but it's not clear how that relates to comparing two arbitrary pieces of text. Also, it would help to clearly distinguish string-based from embedding-based metrics.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
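The reviews and meta-review above make one concrete technical point repeatedly: a weighted ensemble of a string-based metric (e.g., Jaccard or BM25) with an embedding-style cosine similarity tends to beat cosine similarity alone. Below is a minimal, hypothetical sketch of such an ensemble; the bag-of-words cosine is only a stand-in for embedding cosine similarity, the fixed weights stand in for weights that would be fit on a train split, and none of the function names come from the paper under review.

```python
import math
from collections import Counter

def jaccard(a: str, b: str) -> float:
    """String-based similarity: overlap of whitespace-token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

def cosine_bow(a: str, b: str) -> float:
    """Cosine similarity of bag-of-words count vectors
    (a stand-in for embedding-based cosine similarity)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def ensemble_similarity(a: str, b: str, weights=(0.5, 0.5)) -> float:
    """Weighted combination of a lexical metric and a vector-space metric."""
    w_lex, w_vec = weights
    return w_lex * jaccard(a, b) + w_vec * cosine_bow(a, b)
```

In a setup like the benchmark criticized above, the weights would be learned on the train split — which is exactly why the reviewers question whether such an ensemble generalizes to unseen tasks.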
EwFJaXVePU
Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via Dynamic Data Selection
[ "Adyasha Maharana", "Jaehong Yoon", "Tianlong Chen", "Mohit Bansal" ]
Visual instruction datasets from various distributors are released at different times and often contain a significant number of semantically redundant text-image pairs, depending on their task compositions (i.e., skills) or reference sources. This redundancy greatly limits the efficient deployment of continually adaptable multimodal large language models, hindering their ability to refine existing skills and acquire new competencies over time. To address this, we reframe the problem of lifelong Instruction Tuning (LiIT) via data selection, where the model automatically selects beneficial samples to learn from earlier and new datasets based on the current state of acquired knowledge in the model. Based on empirical analyses that show that selecting the best data subset using a static importance measure is often ineffective for multi-task datasets with evolving distributions, we propose Adapt-$\infty$, a new multi-way and adaptive data selection approach that dynamically balances sample efficiency and effectiveness during LiIT. We first construct pseudo-skill clusters by grouping gradient-based sample vectors. Next, we select the best-performing data selector for each skill cluster from a pool of selector experts, including our newly proposed scoring function, Image Grounding score. This data selector samples a subset of the most important samples from each skill cluster for training. To prevent the continuous increase in the size of the dataset pool during LiIT, which would result in excessive computation, we further introduce a cluster-wise permanent data pruning strategy to remove the most semantically redundant samples from each cluster, keeping computational requirements manageable. We validate the effectiveness and efficiency of Adapt-$\infty$ over a sequence of various multimodal instruction tuning datasets with various tasks, including (Knowledge) VQA, multilingual, grounding, reasoning, language-only, and multi-image comprehension tasks. 
Training with samples selected by Adapt-$\infty$ alleviates catastrophic forgetting, especially for rare tasks, and promotes forward transfer across the continuum using only a fraction of the original datasets.
[ "Multimodal Instruction Tuning", "Continual Learning", "Data Selection" ]
Accept (Poster)
https://openreview.net/pdf?id=EwFJaXVePU
https://openreview.net/forum?id=EwFJaXVePU
ICLR.cc/2025/Conference
2025
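The rebuttal threads in the review record below report Relative Gain, Forgetting rate, and Average Accuracy (the paper defines them in Sec. 5.1; those definitions are not reproduced in this record). As a hedged illustration of how such continual-learning metrics are typically computed, the sketch below uses the standard formulation over a timestep-by-dataset accuracy matrix; the paper's exact formulas may differ.

```python
def average_accuracy(acc):
    """Mean accuracy over all datasets after the final training step.
    acc[t][j] is accuracy on dataset j after step t (defined for j <= t)."""
    final = acc[-1]
    return sum(final) / len(final)

def forgetting_rate(acc):
    """Standard continual-learning forgetting: for each earlier dataset,
    the best accuracy it ever reached minus its final accuracy, averaged."""
    T = len(acc)
    drops = []
    for j in range(T - 1):
        best_past = max(acc[t][j] for t in range(j, T - 1))
        drops.append(best_past - acc[T - 1][j])
    return sum(drops) / len(drops)
```

On this formulation, a sequential learner whose early-dataset scores collapse shows a high forgetting rate, matching the pattern in the rebuttal's tables (e.g., 18.1 for sequential training vs. 2.3 for the selected-data run in the LLM experiment).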
{ "note_id": [ "sMJpAGXxFf", "rZw4xjODHb", "rXAHHhC59B", "pzBvE305bQ", "onL3nPVuD6", "nLz3HKMmG3", "mnIycS4Nch", "i5LANgjwD4", "i2trPL0Num", "fGglmzmPXL", "exwjcTmzYu", "bIS2HRdHlX", "b6InZZI3Le", "b4goxxibhd", "LC3TrE1NF9", "JTyuzmmec7", "HkzBEpYYfG", "DCr6HHrWiW", "C6eyxO5cz2", "BvVfLgvs4D", "9FbXCfZPLp", "6kUekmCsjG", "3hVC7Jkzn7", "1t0d4Ro0Pp" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732497525230, 1732306337550, 1732175737812, 1732175781002, 1734645465139, 1732685173675, 1732665289227, 1732157021778, 1732498052810, 1732159395974, 1732431604531, 1733033550112, 1732033131966, 1732298002058, 1730890938662, 1737523676861, 1730877545880, 1732723998058, 1732157910370, 1732497274470, 1731116747722, 1733033590212, 1730255901244, 1732298325997 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Reviewer_eQ3c" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Area_Chair_Fz7F" ], [ "ICLR.cc/2025/Conference/Submission5008/Reviewer_bvXZ" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "~Lai_Wei7" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5008/Reviewer_bvXZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5008/Reviewer_VSGx" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Reviewer_9s1q" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ], [ "ICLR.cc/2025/Conference/Submission5008/Reviewer_eQ3c" ], [ "ICLR.cc/2025/Conference/Submission5008/Authors" ] ], "structured_content_str": [ "{\"title\": \"A gentle reminder\", \"comment\": \"Dear Reviewer bvXZ,\\n\\nThank you for your effort in reviewing our paper. We kindly notify you that the end of the discussion stage is approaching. Could you please read our responses to check if your concerns are clearly addressed? During the rebuttal period, we made every effort to address your concerns faithfully:\\n\\n- We have additionally demonstrated strong generalizability of the proposed LAMP in the language domain.\\n- We have clarified the reviewer's question regarding the relevance between t-SNE stochasticity and model performance.\\n\\nThank you for your time and effort in reviewing our paper and for your constructive feedback, which has significantly contributed to improving our work. We hope the added clarifications and the revised submission address your concerns and kindly request to further **reconsider the rating/scoring**. We are happy to provide further details or results if needed. \\n\\nWarm Regards, \\nAuthors\"}", "{\"comment\": \"Thank you for the clarifications and the additional experiments, which address most of my concerns. Besides, the additional visual chat example effectively demonstrates the proposed method's applicability and effectiveness in real-world MLLM scenarios. 
I do agree that the proposed multimodal lifelong instruction tuning framework is practical and appreciate the improvements achieved by the authors. Therefore, I'd be willing to raise the score to lean towards accept.\"}", "{\"comment\": \"Dear Reviewer VSGx,\\n\\nThank you for reviewing our paper and recognizing the merits of our work. Please see our response to the weaknesses below:\\n\\n**Sequential Learning**: We wholeheartedly agree that sequential learning is not a practical approach since most LLMs and MLLMs rely on multi-task instruction tuning from a massive dataset. Our work is motivated by this very fact, and hence, in contrast to most continual learning works that train the model on a single task at each time step, *we train our model on multi-task datasets at each timestep*. We consider the scenario where multiple multi-task instruction tuning datasets are available for training and there is significant overlap between tasks as well as some rare tasks that appear in one or more datasets only. This scenario is more practical because we see frequent releases of new multi-task instruction tuning datasets in the research community. Our experimental setup and proposed method enable the learning of new tasks and reinforcing as well as forward transfer of old tasks without having to retrain the model from scratch. We have emphasized and expanded these points in the revised draft (see the blue text in the Introduction).\\n\\n\\n\\n**Computational Complexity of Data Selection**: Random pruning is a rather effective method when it comes to data selection, as reported in many prominent works in the data selection literature. For instance, [4] reports that selecting 5% instruction tuning data via random pruning results in performance that is very close to that with 100% data; their proposed method i.e., gradient-based similarity results in a few points improvement over random. 
[5] show that it is hard to beat random pruning in very high pruning settings for the ImageNet dataset. When the availability of computational resources is low, random pruning is indeed a promising tradeoff for gain vs. compute. Nevertheless, data selection methods, including LAMP, are worth investigating for several reasons:\\n\\n- Selecting representative and important data samples can accelerate training [6, 7]. Besides, it is important from the perspective of scientific research as to why one data subset works better than another subset, selected randomly or otherwise. Developing data selection methods that are guided by concrete and intuitive hypotheses is the best way to understand this science. \\n- Data selection methods either require importance scores or high-dimensional representations of data samples in order to represent the data landscape correctly and incur computational costs in this process. Similarly, LAMP uses high-dimensional representations for selecting data correctly and as we show in Table 2, exceeds the performance of random pruning by >5%, yielding important insights about data selection for lifelong learning. More importantly, LAMP promotes the forward transfer of skills during lifelong learning (109% relative gain) whereas random selection falls short (95.3% relative gain). This suggests that random selection is sub-optimal in practical settings where tasks can be unbalanced, redundant, or rare. Such scenarios need more sophisticated techniques for effective learning.\\n- We adopt several techniques that significantly reduce the computational complexity of our method such as \\n\\n (i) random projections for reducing memory footprint of high-dimensional vectors, \\n\\n (ii) MiniBatch k-Means for clustering of large datasets, and \\n\\n (iii) systematic compression of datasets at each time step using deduplication (as outlined in Lite-Lamp, Section 4.4). 
\\n\\nLAMP can be further optimized using libraries like faiss that quantize the high-dimensional space for fast nearest neighbor computations. Moreover, methods like LAMP will continue to reap benefits from faster inference algorithms. Thus, we think that LAMP is a computationally efficient approach to practical data selection for lifelong learning. \\n\\n**LAMP vs. Lite-LAMP**: The relative performance drops due to the reduction in budget size. A larger data budget enables more diversity in the data that is selected for training the model in the next step, whereas a reduction in data size leads to less diversity and smaller improvements. It is notable that in spite of the significantly lower data budget in Lite-LAMP, our method mitigates catastrophic forgetting at all timesteps and preserves ~100% relative gain.\"}", "{\"comment\": \"**Additional Scoring Functions**: Thank you for the suggestion regarding the ablations of scoring functions used in our experiments. The scoring functions we have chosen, i.e., Image Grounding score, EL2N, Entropy, and Perplexity, are based on promising preliminary results on the individual score-selection methods in our experiments (see corresponding rows in Table 2) as well as existing literature [1, 2]. These score functions have been shown to perform well for tasks such as image classification (EL2N, entropy) and language modeling (perplexity). To conduct ablations on our multi-way setup, we ran the following experiments:\\n\\n(a) *Additional scoring functions*: We added GraND [2] and AUM [3] to the set of score functions (i.e., a total of 6 functions) for LAMP.\\n\\n(b) *LAMP without Image-Grounding score*: We are running an experiment using only EL2N, Entropy, and Perplexity as the set of scoring functions.\\n\\nFrom (a), we did not see any improvements over our best results with LAMP using four scoring functions because the GraND and AUM functions did not score high in entropy for any of the pseudo-task clusters. 
We will report on results from (b) as soon as they are available. From these partial results, we can conclude that a score function is useful only when it is highly discriminative for a task cluster. We will add these additional results and discussion to the camera-ready version of the paper.\\n\\n\\n[1] Marion, Max, et al. \\\"When less is more: Investigating data pruning for pretraining llms at scale.\\\" arXiv preprint arXiv:2309.04564 (2023).\\n\\n[2] Paul, Mansheej, Surya Ganguli, and Gintare Karolina Dziugaite. \\\"Deep learning on a data diet: Finding important examples early in training.\\\" Advances in neural information processing systems 34 (2021): 20596-20607.\\n\\n[3] Pleiss, Geoff, et al. \\\"Identifying mislabeled data using the area under the margin ranking.\\\" Advances in Neural Information Processing Systems 33 (2020): 17044-17056.\\n\\n[4] Xia, Mengzhou, et al. \\\"Less: Selecting influential data for targeted instruction tuning.\\\" arXiv preprint arXiv:2402.04333 (2024).\\n\\n[5] Zheng, Haizhong, et al. \\\"Coverage-centric coreset selection for high pruning rates.\\\" arXiv preprint arXiv:2210.15809 (2022).\\n\\n[6] Goyal, Sachin, et al. \\\"Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. \\n\\n[7] Evans, Talfan, et al. \\\"Data curation via joint example selection further accelerates multimodal learning.\\\" arXiv preprint arXiv:2406.17711 (2024).\"}", "{\"metareview\": \"This work investigates Lifelong Instruction Tuning (LiIT) via dynamic data selection, which enables learning new skills while alleviating forgetting. Before rebuttal, the reviews were mixed and the concerns were about novelty, generalization for LLMs, efficiency, etc. The discussion addressed most of the concerns. After rebuttal, all reviewers gave positive ratings due to the practical settings and promising performance. 
Please incorporate suggested experiments and comments from reviewers in the revised submission.\", \"additional_comments_on_reviewer_discussion\": \"Before rebuttal, the overall rating was mixed. After discussion, Reviewer bvXZ increased the score to 8 and Reviewer eQ3c changed the score to positive. All reviewers agree to accept the work.\"}", "{\"comment\": \"Thanks for the detailed reply. The authors have adequately addressed my concerns. I thus increase the score. I suggest authors add the additional LLM experiments to the appendix upon publication.\"}", "{\"title\": \"Additional Results and Gentle Reminder\", \"comment\": \"Dear Reviewer VSGx,\\n\\nWe have results for experiment (b) for **Additional scoring functions** mentioned in our previous response. The results indicate that the multi-way pruning module in LAMP is effective without our Image-Grounding score too.\\n\\n|Methods|Relative Gain|Forgetting rate|Avg. Accuracy|\\n|---|---|---|---|\\n| IG Score only | 92.3 | 5.6 | 45.6 |\\n|(**new**) LAMP (without IG Score) | 106.9 | 0.3 | 51.2 |\\n|LAMP (all 4 scoring functions)| 109.7 | 0.4 | 52.5|\\n\\nEvidence shows that the multi-way pruning method is very effective when multiple *good* scoring functions are candidates. This also suggests that the performance with our method can improve with better scoring functions, making LAMP a versatile method. We will add these ablations to the camera-ready version of the paper.\\n\\nIn summary, we have discussed the practicality of our lifelong instruction tuning setting and demonstrated the efficiency and generalizability of our proposed LAMP method (please see additional results in the updated pdf). Thank you for your time and effort in reviewing our paper and for your constructive feedback, which has significantly contributed to improving our work. We hope the added clarifications and the revised submission address your concerns and kindly request to further reconsider the rating/scoring. 
We are happy to provide further details or results if needed.\\n\\nWarm Regards,\\n\\nAuthors\"}", "{\"title\": \"General Comments\", \"comment\": [\"We thank the reviewers for their time and valuable comments. We appreciate that the reviewers have recognized:\", \"**Authors\\u2019 deep understanding of the field and direction**, providing readers with extensive knowledge and unique insights. [9s1q]\", \"The proposed Lifelong instruction learning setting is **more practical in real-world applications**. [bvXZ, eQ3c]\", \"**Clear research objectives and a logical structure**. [9s1q, VSGx]\", \"**The proposed method appropriately addresses the challenges** in the scenario. [bvXZ, VSGx]\", \"**Extensive experiments and valuable analyses**. [9s1q, eQ3c, bvXZ]\", \"**The paper is written very well; well organized, and easy to follow**. [bvXZ, eQ3c, VSGx]\", \"During the rebuttal period, we made every effort to address all the reviewers' concerns faithfully. We hope that our responses have addressed your comments. We thank you again for reviewing our work.\"]}", "{\"comment\": \"Dear Reviewer eQ3c,\\n\\nThank you for taking the time to review and discuss our responses and revisions in detail. We are grateful for the thoughtful feedback and discussions.\\n\\nWe truly appreciate your support and the updated score!\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer bvXZ,\\n\\nThank you for reviewing our work and appreciating the rigorous thoughts and experimental analysis that went into the paper. Please see our response to the weaknesses below:\\n\\n**LAMP for LLMs**: Thank you for the suggestion; we agree that the data selection method designed in our paper can be extended to LLMs as well, and this will be exciting future work. During the rebuttal period, we have decided to add experiments using LAMP on LLMs only. One thing to note is that LLMs generally undergo multiple post-training phases after instruction tuning, such as alignment and other fine-tuning stages. 
On the other hand, our method and the proposed lifelong instruction tuning scenario focus on the instruction tuning phase, and it might be important to understand how data selection interacts with these subsequent post-training phases, which will be a highly practical and exciting future research direction in lifelong learning of LLMs with data selection.\", \"our_experimental_setup_is_as_follows\": [\"*Model*: LLaMA3-8B\", \"*Datasets in the order of training*:\", \"Natural Instructions [~1600 NLP tasks]\", \"Alpaca CoT [geared towards Chain-of-Thought ability]\", \"xP3 [Multilingual]\", \"UltraChat [Multi-turn dialogues]\", \"Firefly [Chinese only]\", \"Since training and evaluation take substantial time, we plan to report our results on this experimental setup by this Friday.\", \"**t-SNE Seed**: We have used the t-SNE visualizations only to understand the spatial distribution of feature vectors from different sources. We repeated the t-SNE visualization experiment across 10 seeds and did not find a significant difference in the quality of clusters across these seeds for any of the feature vector sources; the results reported in our paper are stable across seeds. The figures reported in the paper are generated from the same seed for different datasets for fair comparison.\"]}", "{\"comment\": \"We have results from our runs with LAMP and LLaMA3-8B with the datasets enumerated in our earlier response, i.e., Natural Instructions, Alpaca CoT, xP3, UltraChat, and Firefly. We use the set of evaluation benchmarks used in [1] to evaluate our models (MMLU, GSM, BBH, TydiQA, CodexEval, AlpacaEval) and compute the following metrics based on performance across these benchmarks: Average accuracy, Relative gain, and Forgetting rate (see Sec. 5.1 in our paper). 
The results at each timestep are as follows (numbers indicate average accuracy):\\n\\n\\n|Methods|NI (t=1)|Alpaca (t=2)|xP3 (t=3)|UltraChat(t=4)|Firefly(t=5)|\\n|---|---|---|---|---|---|\\n|Sequential|35.1|**43.9**|32.1|41.8|36.8|\\n|Random|35.1|42.7|40.1|42.9|41.8|\\n|Ours|35.1|$\\\\underline{43.3}$|**42.6**|**43.6**|**43.1**|\\n\\n\\nBased on these results, the relative gain and forgetting rate are as follows:\\n\\n|Methods|Relative Gain|Forgetting rate|Avg. Accuracy|\\n|---|---|---|---|\\n|Sequential| - | 18.1 |36.8 |\\n|Random| 91.6 | 5.6 |41.8 |\\n|Ours| **97.8** | **2.3** | **43.1**|\\n\\nThe model is trained on 25k samples at each time step, similar to the MLLM experiments in our paper. We implemented the Lite-Lamp version of our method for these experiments. We see up to 18% forgetting rate in the sequential setting, which comes down to 5.6% and 2.3% with random pruning and LAMP pruning. The average accuracy is highest with our method at the final time step. These results suggest that:\\n\\n(a) lifelong learning from multi-task instruction tuning datasets is a significant problem in the language-only domain as well and\\n\\n(b) our proposed method can significantly alleviate forgetting of skills over time, as well as preserve the maximum accuracy that can be achieved from each dataset.\\n\\n[1] How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources\"}", "{\"title\": \"We have less than two days left in the discussion period.\", \"comment\": \"Dear Reviewer 9s1q,\\n\\nWe sincerely appreciate your efforts in reviewing our paper and your constructive comments. Since there are less than two days left in the discussion period, could you please read our responses to check if your concerns are clearly addressed? 
We believe that our responses resolved all your concerns.\\n\\nWe understand that the criteria for rating a paper can sometimes be subjective; however, if you agree that our work does not have remaining major concerns, we would like to kindly suggest your re-evaluation of the initial rating of this submission.\\n\\nPlease let us know if you have any remaining questions, and we will be more than happy to address them.\\n\\nBest, \\nAuthors\"}", "{\"title\": \"Thanks for your interesting work\", \"comment\": \"Hope that InstructionGPT-4 (https://arxiv.org/abs/2308.12067), which also focuses on multimodal data selection, can be discussed or included in a part of your paper :)\"}", "{\"comment\": \"Dear Reviewer eQ3c,\\n\\nThank you for reviewing our work; please see our response to the weaknesses and questions below:\\n\\n**Novelty**: \\nWe politely emphasize that our paper introduces three novel components in the research field.\\n\\n1. Introducing the practical and realistic continual learning scenarios for MLLM instruction tuning: Lifelong Instruction Tuning (LiIT) (Lines 52-53 and Lines 72-75) with clarification of four critical challenges.\\n2. A new multi-way data selection approach for LiIT that can aggregate the benefits of various scoring functions based on the task.\\n3. A new scoring function, i.e., image grounding score, that can distinguish visually-grounded data samples from non-grounded samples.\\n\\nThe main novelty of our work lies in our proposed lifelong instruction tuning scenario where *the model is trained on a multi-task instruction tuning dataset at each time step*. 
This scenario brings on unique challenges that have not been tackled in previous continual learning works such as overlapping tasks, redundant data, rare tasks, etc. Our experiments range from frequently used baselines such as multi-task learning, random selection, and scoring-based selection to more sophisticated methods based on high-dimensional representations, yielding insightful results on the nature of lifelong learning in LLMs. Our proposed method is based on systematic insights from these experiments; further, the two-step process of task-based selection and semantic deduplication has not been explored in previous works. Most importantly, we show significant improvements with our method and establish a strong baseline for the scenario of lifelong instruction tuning.\\n\\nAt the same time, we respectfully note that other reviewers (```9s1q, bvXZ, VSGx```) agree with the novelty and significance of our proposed scenario. For instance, Reviewer ```9s1q``` stated, *\\u201c...the paper reflects the authors' deep understanding of the field and direction, providing readers with extensive knowledge and unique insights, ..., opening up new avenues for future research.\\u201d* Additionally, these reviewers recognized that the proposed method appropriately and logically addresses well-structured challenges in lifelong multimodal instruction tuning scenarios, emphasizing the significance and uniqueness of the proposed approach compared to prior methods.\\n\\nThe authors hope that the reviewer eQ3c understands our significance and practicality (often considered another \\u2018novelty\\u2019 from a scientific perspective) in formulating and addressing lifelong multimodal instruction tuning for the real world.\\n\\n**Evaluation on Visual Chat**: Thank you for the great suggestion, we have included examples of visual chat in our paper for our models (Figure 6, 7, and 8). 
Since there is no formal evaluation benchmark or quantitative metric for multi-turn dialogue with MLLMs, we used a single representative example for qualitative analysis of visual chat results in the updated draft.\\n\\nFigure 6 shows the multi-chat skills of the LLaVA model at $t$=0. In Figure 7, we note that the models trained using sequential learning (left) and random pruning (right) have poor retention of the multi-chat skill and reply with few-word and/or inaccurate answers. However, our method (results in Figure 8) retains this skill because it pools the multi-chat training data samples into a distinct cluster during the pseudo-task clustering step and selects sufficient samples from this cluster for training at each time step.\\n\\n**Efficiency Analysis**: Thank you for the great suggestion; we have added this to the paper. We present a comparison of the time taken by various data selection methods for our experimental setting (for training with 25k samples at each time step) in Table 6 in the Appendix (see revised pdf). Results are presented for 8 A100 GPUs. The total time taken to train the model without any methodical data selection (i.e., random pruning) is approximately 21 hours. Scoring-based selection methods generally require a forward pass of the model to extract the score (e.g., EL2N, entropy), which takes nearly 48 hours on 8 A100 GPUs for all datasets in our experiments. Methods that use feature embeddings (e.g., COINCIDE) take a similar amount of time since they use a forward pass to extract hidden layer outputs from the models as data representations. Our proposed method requires a longer time, i.e., 92 hours, to perform a backward pass over the model and extract gradients, similar to LESS, which is a state-of-the-art method for data selection for targeted instruction tuning. 
However, with Lite-Lamp, we are able to significantly reduce this time due to systematic compression of the dataset at each timestep.\"}", "{\"summary\": \"This paper introduces Lifelong Multimodal Instruction Tuning, which differs from continual Multimodal Instruction Tuning in that the previous datasets are still included in the dataset pool. To effectively select the data at each timestep, the authors propose a framework that adaptively selects data samples based on their importance and data balance. The experiments show that the method is more effective than baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The considered lifelong setting is more practical than the traditional continual learning setting for multimodal LLMs, since currently the most important aspect of large models is to use all data available better.\", \"The proposed framework considers the key difficulties of data sampling in adapting multimodal LLMs with progressively growing datasets and reasonably tackles the problem with carefully designed pipelines. I find sections 3 and 4 meaningful with rigorous thoughts and experiment analysis.\", \"The experiments are thorough and clearly show the effectiveness of the proposed framework over the considered baselines.\", \"The paper is written very well.\"], \"weaknesses\": [\"The proposed method, except for the image grounding score, seems not to be specifically designed for multimodal LLMs and can be potentially applied to broader domains like pure LLMs. However, the paper only considers multimodal LLMs which lowers the impact and importance of the paper.\"], \"questions\": \"the t-SNE visualization has been shown to have high stochasticity across seeds. 
Have you manually tuned for your method and other baselines throughout the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces a novel approach, LAMP, for optimizing Lifelong Instruction Tuning (LiIT) of Multimodal Large Language Models (MLLMs). Traditional visual instruction datasets, which are frequently redundant and task-specific, hinder MLLMs' ability to continuously adapt and learn new skills effectively. LAMP addresses this by dynamically selecting and pruning data to maximize training efficiency while minimizing computational demands. It groups data into pseudo-skill clusters, applying adaptive selection to prioritize relevant samples for each skill and using a cluster-wise pruning method to reduce redundancies. This approach mitigates catastrophic forgetting, enhances knowledge transfer, and improves performance on rare tasks, all while leveraging only a fraction of the original dataset. LAMP's adaptive tuning demonstrates significant benefits for sustaining long-term model growth in evolving multimodal learning environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method mitigates catastrophic forgetting and supports forward transfer, enabling the model to retain skills and adapt to new tasks seamlessly.\\n2. LAMP\\u2019s cluster-wise pruning manages dataset growth by removing redundancies, keeping training scalable and resource-efficient.\\n3. This paper is well-written and its organization is logical.\", \"weaknesses\": \"1. 
A primary concern regarding the proposed sequential data structure and the lifelong-based instruction tuning strategy is their practical applicability, as current MLLMs typically rely on the integration of diverse tasks together for effective multitask training instead of sequential learning.\\n2. The paper mentions (lines 301-302) that \\\"our proposed multi-way approach can be seamlessly extended with new scoring functions based on users\\u2019 needs\\\"; however, the proposed method only employs four scoring functions in practice and lacks ablation experiments for additional scoring functions. I suggest the authors conduct related experiments to demonstrate the proposed method\\u2019s flexibility and scalability.\\n3. In Tab.2, the effectiveness of the random pruning strategy is notably highlighted. The computational costs and complexities associated with other sophisticated pruning techniques do not appear to correspond proportionately to the performance improvements achieved. This raises questions regarding the efficiency of the proposed pruning methods in comparison to simpler alternatives.\\n4. In Tab.2, although LITE-LAMP utilizes 4 times less training data (25k vs. 100k), its relative improvement dropped significantly when compared to LAMP, i.e., 99.7 vs. 109.7.\", \"questions\": \"Please focus on Weaknesses, and I encourage the authors to provide further discussion and clarification on these points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your continued engagement and for increasing your score. 
We are glad we adequately addressed your concerns, and we will definitely add the additional LLM experiments to the appendix upon publication; we believe this result further strengthens the generalizability and effectiveness of our proposed method.\"}", "{\"comment\": \"Dear Reviewer 9s1q,\\n\\nThank you for recognizing the merits of our work. Please see our response to the weaknesses below:\\n\\n**Figures**: Thank you for the suggestions for making the figures more understandable. We have updated the explanations for subfigures A and C in Figure 2 and improved the text font in all of our figures for clarity in the updated version of our paper (see revision).\\n\\n**Computational costs of clustering in LAMP**: We note that our clustering approach incurs only marginal additional computational cost and does not require significant resources. Naive k-means clustering has a computational complexity of $O(tknd)$ where $t$ = iterations of the algorithm, $k$ = number of clusters, $n$=number of samples, $d$=dimension of vectors, implying that the computation costs rise linearly with each of these factors. \\n\\n- The value $t$ is constant in our experiments.\\n- The variables $k$ and $d$ are small in our experiments (see Lines 354-358). The optimal values of $k$ are between 5 and 50, and the value of $d$ is capped at 8192 by virtue of using random projections to compress high-dimensional vectors (Lines 877-881 in Appendix). \\nThe value of $k$ can increase with the size of the datasets, however, there exist many efficient methods for calculating pairwise cosine similarities effectively. We use the efficient implementation of `MiniBatch-KMeans` in scikit-learn to compute clusters for our datasets, which significantly reduces the compute time. \\n- The maximum time taken for a single k-means clustering run in our LAMP experiments using this implementation is 48 minutes on a 48-core CPU for approximately 1.8 million samples. 
In addition, the `faiss` library can also be used for efficient computation of pairwise cosine similarities using quantization. \\n- Further, our proposed approach `Lite-Lamp` seeks to reduce the number of samples $n$ at each step, effectively reducing the time taken for k-means clustering. The maximum time taken for a single k-means clustering run in Lite-LAMP experiments is **15 minutes** on the 48-core CPU for approximately 660K samples, which makes it **as efficient as other data selection methods** like LESS [1] and COINCIDE [2].\\n\\nSince there are many ways to control each of the variables that contribute to the complexity of k-means clustering ($k$, $n$ and $d$), we think that LAMP is not a prohibitively computationally expensive method.\\n\\n[1] Xia, Mengzhou, et al. \\\"Less: Selecting influential data for targeted instruction tuning.\\\" arXiv preprint arXiv:2402.04333 (2024).\\n\\n[2] Lee, Jaewoo, Boyang Li, and Sung Ju Hwang. \\\"Concept-skill Transferability-based Data Selection for Large Vision-Language Models.\\\" arXiv preprint arXiv:2406.10995 (2024).\\n\\n**Combinatorial Prediction of Function Tools for Sample Selection**: Each function tool assigns a different value to every sample in our dataset. Since each functional tool has its unique range and distribution, we normalize these values to enable comparison of entropies across these tools. We appreciate the suggestion for combining these distributions to enable further gains for our method. 
We believe this idea presents an exciting direction for future work, potentially improving the robustness and efficiency of data selection in future implementations, but also introduces additional challenges that need to be addressed:\\n\\n1) determining the top-$k$ or a dynamic number of beneficial function tools for each data pair, and \\n2) controlling the relative influence of predictions.\\n\\nThe second point would be crucial as different tools may have varying degrees of relevancy and reliability for different samples, and an imbalance could affect the robustness of the selection process.\\n\\nIn summary, the authors thank the reviewer for raising this constructive discussion and believe that exploring these challenges, as well as developing methods to effectively combine the outputs of multiple functional tools, will be a meaningful direction for future work.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear Reviewer 9s1q,\\n\\nThank you for your effort in reviewing our paper. We kindly notify you that the end of the discussion stage is approaching. Could you please read our responses to check if your concerns are clearly addressed? During the rebuttal period, we made every effort to address your concerns faithfully:\\n\\n- We have updated the explanations for subfigures A and C in Figure 2 and improved the text font in all of our figures.\\n- We provide the details regarding the computational costs of clustering in LAMP. We would like to note that the proposed approach is as efficient as other data selection methods like LESS [1] and COINCIDE [2].\\n- We further provide a discussion regarding the Combinatorial Prediction of Function Tools for Sample Selection, which should be a promising research direction that entails two distinct and meaningful research challenges.\\n\\nThank you for your time and effort in reviewing our paper and for your constructive feedback, which has significantly contributed to improving our work. 
We hope the added clarifications and the revised submission address your concerns and kindly request to further **reconsider the rating/scoring**. We are happy to provide further details or results if needed. \\n\\nWarm Regards, \\nAuthors\"}", "{\"summary\": \"This is an interesting article that addresses the issue of forgetting in multimodal large models within multi-task and continual learning scenarios from the perspective of instruction dataset selection and proposes effective strategies. Traditional training methods often lead to the model forgetting previously learned skills when new tasks are introduced. However, the article introduces the LAMP (Lifelong and Adaptive Multi-way Pruning) method, which uses gradient vectors for pseudo-task clustering and combines an entropy-maximization strategy for sample selection, achieving effective sample filtering and diversity balance. LAMP employs semantic redundancy detection and coverage-based sampling strategies to prune the data pool, control its size, and prevent unrestrained data pool expansion, while ensuring the representativeness and diversity of tasks. Moreover, training budgets are allocated reasonably across task clusters, allowing the model to be sufficiently trained on each task, thereby reducing forgetting and enhancing generalization capabilities. Overall, LAMP performs well in multi-task and continual learning environments, ensuring that the model retains previously learned skills when training on new tasks, even with limited computational resources, thus achieving efficient and balanced continual learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This article starts from the current situation where the instruction-tuning datasets used in fine-tuning multimodal large models are continuously increasing and points out the redundancy problem among these datasets. 
It then proposes a solution to address the issue of continual learning, demonstrating significant practical relevance. In the design of the proposed solution, the authors first analyze the problems and limitations of existing methods, then detail how the new solution effectively addresses these issues, showcasing clear research objectives and a logical structure. The narrative of the paper reflects the author\\u2019s deep understanding of the field and direction, providing readers with extensive knowledge and unique insights. Moreover, the article verifies the correctness and validity of the proposed solution through extensive experiments, opening up new avenues for future research and demonstrating considerable i\", \"weaknesses\": \"Although it is good to see the paper as a whole, there still exist some minor drawbacks in this work.\\n1.\\tIn Figure 2, the meanings of the axes in subfigure C are unclear, and the explanatory content for subfigure A is insufficient, failing to adequately explain the issue that the figure is intended to illustrate.\\n2.\\tThe text descriptions in all the images are not very clear and appear somewhat distorted.\\n3.\\tIn the paper, clustering operations on datasets based on gradients and subsequent pruning of redundant data use k-means and pair-wise cosine similarity, respectively. This computational approach likely incurs significant time and space costs in such large-scale instruction datasets. The question arises as to whether there are corresponding measures to address these challenges.\\n4.\\tDuring the data selection process, a pool of function tools is used, and the function that produces higher average entropy is ultimately chosen. 
In this process, combining multiple functions can be considered to achieve more robust data selection.\", \"questions\": \"Please refer to the weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We have less than two days left in the discussion period.\", \"comment\": \"Dear Reviewer VSGx,\\n\\nWe sincerely appreciate your efforts in reviewing our paper and your constructive comments. Since there are less than two days left in the discussion period, could you please read our responses to check if your concerns are clearly addressed? We believe that our responses resolved all your concerns.\\n\\nWe understand that the criteria for rating a paper can sometimes be subjective; however, if you agree that our work does not have remaining major concerns, we would like to kindly suggest your re-evaluation of the initial rating of this submission.\\n\\nPlease let us know if you have any remaining questions, and we will be more than happy to address them.\\n\\nBest, \\nAuthors\"}", "{\"summary\": \"This paper proposes a new scenario of lifelong instruction tuning, where Multimodal Large Language Models (MLLMs) continuously learn from new datasets that include both new and redundant data samples. They then propose LAMP, an adaptive data selection approach designed for this context. 
The authors claim that the proposed method can select beneficial samples to adapt the current model's knowledge to the continuously changing dataset distribution.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The setting of lifelong instruction tuning introduced in the paper is practical in real-world applications.\", \"The paper is overall well organized and easy to follow.\", \"The proposed method achieves convincing results in experiments, and comprehensive ablation studies are provided to validate the effectiveness of individual components.\"], \"weaknesses\": [\"The method does not appear very novel to me, as both gradient-based clustering and ensemble scoring functions for data selection have been explored in previous works.\", \"In evaluation, the paper focuses on tasks with short answers but omits open-ended visual chat, where multimodal large language models (MLLMs) generate long responses. However, visual chat is essential for assessing the comprehensive capabilities of MLLMs.\", \"Compared to random pruning, which already serves as a strong baseline, the proposed method (including the LITE or efficient version) incurs significantly higher computational and time costs, and no efficiency analysis on the computational cost is provided.\", \"Clarification Issues. In Table 2, the authors do not clearly explain what \\\"data size at *t*\\\" refers to.\"], \"questions\": [\"How is \\\"accuracy\\\" measured on LLaVA-Bench?\", \"In Table 2, why does multi-task learning perform worse than random pruning, which simply uses less data (e.g., accuracy 46.1% vs. 47.2%)? Can pruning more data lead to a better result?\", \"Can you provide numerical comparisons on the computation cost of the proposed method against the random pruning baseline? 
Additionally, how does the computation cost of data selection compare with the training cost at step $t$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Computational Complexity of Data Selection**: Random pruning is a rather effective method when it comes to data selection, as reported in many prominent works in the data selection literature. For instance, [1] reports that selecting 5% instruction tuning data via random pruning results in performance that is very close to that with 100% data; their proposed method i.e., gradient-based similarity results in a few points improvement over random. [2] show that it is hard to beat random pruning in very high pruning settings for the ImageNet dataset. When the availability of computational resources is low, random pruning is indeed a promising tradeoff for gain vs. compute. Nevertheless, data selection methods, including LAMP, are worth investigating for several reasons:\\n\\n- Selecting representative and important data samples can accelerate training [3, 4]. Besides, it is important from the perspective of scientific research as to why one data subset works better than another subset, selected randomly or otherwise. Developing data selection methods that are guided by concrete and intuitive hypotheses is the best way to understand this science.\\n\\n- Data selection methods either require importance scores or high-dimensional representations of data samples in order to represent the data landscape correctly and incur computational costs in this process. Similarly, LAMP uses high-dimensional representations for selecting data correctly and as we show in Table 2, exceeds the performance of random pruning by >5%, yielding important insights about data selection for lifelong learning. 
More importantly, LAMP promotes the forward transfer of skills during lifelong learning (109% relative gain) whereas random selection falls short (95.3% relative gain). This suggests that random selection is sub-optimal in practical settings where tasks can be unbalanced, redundant, or rare. Such scenarios need more sophisticated techniques for effective learning.\\n\\nWe adopt several techniques that significantly reduce the computational complexity of our method, such as\\n\\n(i) random projections for reducing the memory footprint of high-dimensional vectors,\\n\\n(ii) MiniBatch k-Means for clustering of large datasets, and\\n\\n(iii) systematic compression of datasets at each time step using deduplication (as outlined in Lite-Lamp, Section 4.4).\\n\\nLAMP can be further optimized using libraries like faiss that quantize the high-dimensional space for fast nearest neighbor computations. Moreover, methods like LAMP will continue to reap benefits from faster inference algorithms. Thus, we think that LAMP is a computationally efficient approach to practical data selection for lifelong learning.\\n\\n**Multi-task vs. Random**: Multi-task pruning performs worse than random because the tasks are severely unbalanced across the datasets. For instance, there are nearly 600K samples for captioning and VQA, whereas there are only 20k samples for multilingual and referring expression comprehension. At larger data scales, the unbalanced task subsets lead to drops in performance for the rare tasks; however, in low-data regimes (as demonstrated in random pruning), forgetting is less egregious even when the datasets are unbalanced. Especially for instruction tuning, using less but high-quality data does lead to better results, as also previously shown in [1, 5]. 
The scoring function-guided selection used in LAMP ensures that many of the high-quality data samples are being chosen consistently over time, and results in further improvements beyond random pruning.\\n\\n[1] Xia, Mengzhou, et al. \\\"Less: Selecting influential data for targeted instruction tuning.\\\" arXiv preprint arXiv:2402.04333 (2024).\\n\\n[2] Zheng, Haizhong, et al. \\\"Coverage-centric coreset selection for high pruning rates.\\\" arXiv preprint arXiv:2210.15809 (2022).\\n\\n[3] Goyal, Sachin, et al. \\\"Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[4] Evans, Talfan, et al. \\\"Data curation via joint example selection further accelerates multimodal learning.\\\" arXiv preprint arXiv:2406.17711 (2024).\\n\\n[5] Zhou, Chunting, et al. \\\"Lima: Less is more for alignment.\\\" Advances in Neural Information Processing Systems 36 (2024). \\n\\n**Clarifications**:\\n- *Data size at $t$*: This refers to the size of the selected dataset for training at each time step $t$.\\n- *Accuracy of LLaVA-Bench*: This is computed using accuracy from an LLM as judge (GPT-4 in our experiments), as recommended in the original paper.\"}" ] }
Ew3VifXaxZ
Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection
[ "Fanhu Zeng", "Zhen Cheng", "Fei Zhu", "Hongxin Wei", "Xu-Yao Zhang" ]
Out-of-Distribution (OOD) detection, aiming to distinguish outliers from known categories, has gained prominence in practical scenarios. Recently, the advent of vision-language models (VLM) has heightened interest in enhancing OOD detection for VLM through few-shot tuning. However, existing methods mainly focus on optimizing global prompts, ignoring refined utilization of local information with regard to outliers. Motivated by this, we freeze global prompts and introduce Local-Prompt, a novel coarse-to-fine tuning paradigm to emphasize regional enhancement with local prompts. Our method comprises two integral components: global prompt guided negative augmentation and local prompt enhanced regional regularization. The former utilizes frozen, coarse global prompts as guiding cues to incorporate negative augmentation, thereby leveraging local outlier knowledge. The latter employs trainable local prompts and a regional regularization to capture local information effectively, aiding in outlier identification. We also propose regional-related metric to empower the enrichment of OOD detection. Moreover, since our approach explores enhancing local prompts only, it can be seamlessly integrated with trained global prompts during inference to boost the performance. Comprehensive experiments demonstrate the effectiveness and potential of our method. Notably, our method reduces average FPR95 by 5.17% against state-of-the-art method in 4-shot tuning on challenging ImageNet-1k dataset, even outperforming 16-shot results of previous methods.
[ "Out-of-distribution detection", "Prompt learning", "Vision-language model", "Few-shot learning" ]
Accept (Poster)
https://openreview.net/pdf?id=Ew3VifXaxZ
https://openreview.net/forum?id=Ew3VifXaxZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vy5ka9Rvwq", "uiTH4EO7Dm", "uXDgXiBKBJ", "twbC59gr4G", "soTar0WWZl", "sfhREDo3N1", "sBM6e4NVv9", "rJ6c5Rb8UN", "omE0KEnxA9", "oVOO5ybQtO", "l9KVvpU52B", "f1RwFVl8Qn", "dCkJov10iP", "bSYJ0mRd5O", "aqUCnrkWIY", "X5fKz5hdGt", "WAUzDZyZBX", "UPfIvT8ECM", "TTFMhFlkq5", "TKbOYlONxQ", "OQ77BNYdUu", "MfcYzJRNR3", "KnuT19QcBW", "I6rrHj9ExE", "EGT0aHsc9p", "A2NQio0nm5", "4pIxbw1HjO", "4GiHX1OB00", "3gHIqBeQRh", "0NWl1uryXx", "0LWh0beRai" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733135730803, 1732262933585, 1730573618333, 1732618199912, 1732618604973, 1733144367672, 1730557264447, 1732522440326, 1732502444001, 1732261921651, 1732618271250, 1732533564997, 1732263276576, 1732502551964, 1732262648575, 1732569624026, 1732262869372, 1735140349775, 1730428924563, 1732263355804, 1732502584271, 1732262586828, 1732263413960, 1737523785640, 1733159454088, 1732262374142, 1732955695510, 1732526738091, 1730610728962, 1732263160283, 1732502521666 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_NiJR" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_NiJR" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_KX2t" ], [ 
"ICLR.cc/2025/Conference/Submission6694/Reviewer_KX2t" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_NiJR" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_GNZR" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Area_Chair_sQtt" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_GNZR" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Reviewer_dbE1" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ], [ "ICLR.cc/2025/Conference/Submission6694/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer dbE1:\\n\\nThank you for taking the time to review our paper and propose constructive suggestion. \\nCould you please kindly review our updated manuscript or previous comments? We have provided detailed responses to each question raised by the reviewer and we believe we have addressed all the concerns in our updated paper and previous responses. Thus, we would appreciate it if the reviewer could view the rebuttal and reevaluate the score for our paper. 
If the reviewer has more questions, we will try our best to address the problems.\\n\\nBest wishes,\\n\\nSubmission 6694 Authors\"}", "{\"title\": \"Author Response to Reviewer NiJR for Q4-Q5\", \"comment\": \"#### **Q4: Influence of the two hyperparameters.**\", \"a4\": \"In Table 9, we aim to explain that the proposed method is not sensitive to the hyperparameters over a wide range, **at least in the entire ablation process of our practical experiment** (in our experiment 1-10 for $\\\\lambda_{neg}$ and 0.1-1 for $\\\\lambda_{reg}$). Empirically, we find that the ratio of the two corresponding coefficients should be within a wide range (approximately 2-10), and we do not carefully select the hyperparameters. We use the two coefficients just for a rigorous description of the entire training procedure in Eqn. 8, and we highlight that our method still gets robust results without any coefficient parameter ($\\\\lambda_{neg}=1$, $\\\\lambda_{reg}=1$ in the table below) and outperforms previous methods like LoCoOp. We also carry out an ablation study on $\\\\lambda_{neg}$ to demonstrate the effectiveness of different modules.\\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|\\n|Ours (main paper)|9.65| 97.87| 20.40 |95.57 |29.39| 92.67 |51.20 |88.00 |**27.66** |**93.53**|\\n|Ours (w/o $\\\\lambda_{neg}$)| 9.84 |97.88| 24.37| 94.97| 32.84 |91.94| 50.02 |88.32 |**29.27**| **93.27**|\\n|Ours ($\\\\lambda_{neg}=1$, $\\\\lambda_{reg}=1$)|12.24|97.53|25.11|95.23|31.24|92.46|50.03|88.46|**29.65**|**93.42**|\\n\\n#### **Q5: Nonlinear change in FPR95.**\", \"a5\": \"Nonlinear behavior of this kind in few-shot learning has been widely observed in the field of OOD detection[1]. For example, **a similar trend is observed** in ID-like[2] (1-shot and 4-shot in Places, shown in the table below) and LoCoOp[1] (4-shot and 16-shot in SUN and Places, shown in Table 1). 
The reasons for this phenomenon could be: **1)** the effectiveness of few-shot learning, which learns knowledge of downstream tasks efficiently even from very few samples; **2)** for few-shot OOD detection, more samples may not necessarily bring better performance, and incorporating more samples may confuse the discrimination, which should **be a characteristic of the two datasets**. In Table 8, the nonlinear change can partly be attributed to the fact that our local-based model learns the characteristics of OOD detection efficiently and reaches the best discrimination performance between ID and OOD in FPR95, while more samples just confuse the process.\\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|\\n|ID-like (1 shot)|14.57 |97.35|44.02 |91.08| 41.74 |91.15 |26.77| 94.38| | |\\n|ID-like (4 shot)| 8.98 |98.19| 42.03 |91.64| 44.00| 90.57 |25.27 |94.32| 26.08| 94.36|\\n\\n[1] Few-Shot Out-of-Distribution Detection via Prompt Learning. NeurIPS 2023.\\n\\n[2] ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection. CVPR 2024.\"}", "{\"summary\": \"The paper presents a novel approach to few-shot out-of-distribution (OOD) detection. The aim of this paper is to address a limitation of existing few-shot OOD detection methods, which predominantly rely on global prompts and struggle to identify challenging OOD images that differ only locally from in-distribution (ID) images.\\n\\nTo enhance detection accuracy for these difficult cases, the proposed method introduces learnable local prompts. Specifically, the proposed method first picks out multiple randomly cropped regions from the original image and classifies them as positive or negative based on their similarity to the global text embedding. 
These regions are then used as training examples to learn the positive and negative learnable local prompts.\\n\\nExperimental results on several popular benchmark datasets for few-shot OOD detection show that the proposed method can achieve better accuracy than several existing few-shot OOD detection methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper proposes a novel idea of learning local prompts, which has not been well explored in the context of few-shot OOD detection.\", \"Experimental results reported in this paper show that the proposed method outperforms existing methods.\", \"This paper is well-organized and easy to follow.\"], \"weaknesses\": \"My main concerns are about the experimental results.\\n\\n**W1**. Some of the results, such as AUROC in Table 1, show very small performance differences (less than 1%) between the proposed method and the existing methods. Since the proposed method uses random cropping to learn local prompts, the standard deviation of accuracy should be reported to show the significance of the performance differences.\\n\\n**W2**. Looking at Fig. 4, the accuracy differences between Ours w/ LNP and Ours w/o LNP appear to be quite small. LNP is one of the core ideas of the proposed method. If the accuracy differences are small, the superiority of the proposed method would be questionable.\\n\\n**W3**. As stated in the second paragraph of the introduction section, the aim of this paper is to detect hard OOD samples that are similar to the ID classes as a whole, but can be distinguished only by looking at subtle local differences. However, there is no support in this paper for whether the proposed method can actually detect such a hard OOD sample. It would be good to show some examples of OOD images that could not be detected with the global prompts alone, but could be detected with the introduction of the local prompts.\\n\\n**W4**. 
Based on the results in Table 9, the authors claim that the accuracy of the proposed method is not sensitive to the values of the two hyperparameters $\\\\lambda_{\\\\text{neg}}$ and $\\\\lambda_{\\\\text{reg}}$. However, this claim seems to contradict the effectiveness of the proposed method since the learning of the local prompt, which is the core of the proposed method, is controlled by these two hyperparameters (see Eq. 8).\\n\\n**W5**. Table 8 shows that the FPR95 values change nonlinearly with the number of shots; the FPR95 values improve as the number of shots increases from 4-shot to 8-shot, but deteriorate as it increases to 16-shot. What is the reason for this?\", \"questions\": \"As noted in the Weaknesses section, I believe the experiments have several significant shortcomings. While all of these issues are important, I have listed them in descending order of priority. The first four points (**W1-W4**) directly concern the effectiveness of the proposed method and the motivation of this paper. I would like to encourage the authors to address these points in their response.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional Response to Reviewer NiJR for Q2-Q3\", \"comment\": \"We are grateful to the reviewer for taking the time to view the rebuttal and having discussion with us. We sincerely thank the valuable suggestions from the reviewer and we answer remaining questions in detail sequentially.\", \"a2\": \"We would like to make a clarification that ours with LNP is also an OOD metric proposed by us. The two metrics both integrate global and local information. Concretely, the local prompts are also trained by our proposed method and the difference lies in the utilization of local negative prompts. 
Specific definitions are shown below.\n\n$S_{\\mathrm{R\\text{-}MCM}}^{\\text{w/ LNP}}(\\boldsymbol{x}) = S_{\\mathrm{MCM}}(\\boldsymbol{x}) + \\mathcal{T}_k^{\\mathrm{mean}}\\Big(\\exp(\\mathrm{sim}(z_h^l, t_i)/T) \\big/ \\big(\\sum_{j=1}^C \\exp(\\mathrm{sim}(z_h^l, t_j)/T) + \\sum_{j=1}^{N_{\\mathrm{neg}}} \\exp(\\mathrm{sim}(z_h^l, \\hat{t}_j)/T)\\big)\\Big)$\n\nand\n\n$S_{\\mathrm{R\\text{-}MCM}}^{\\text{w/o LNP}}(\\boldsymbol{x}) = S_{\\mathrm{MCM}}(\\boldsymbol{x}) + \\mathcal{T}_k^{\\mathrm{mean}}\\Big(\\exp(\\mathrm{sim}(z_h^l, t_i)/T) \\big/ \\sum_{j=1}^C \\exp(\\mathrm{sim}(z_h^l, t_j)/T)\\Big)$\n\nWe hold the view that the utilization of local prompts (enhancing local information), local negative prompts (providing outlier knowledge), and both local-based metrics, i.e., ours w/o LNP and ours w/ LNP, are all our contributions. We give a detailed comparison of the proposed method with LoCoOp in a progressive manner. If the reviewer wants to figure out the influence of the proposed method, the comparison should be within the first two lines below:\n\n\n|method|iNaturalist||SUN||Places||Texture||Average||$\\Delta$|\n|-|-|-|-|-|-|-|-|-|-|-|-|\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC||\n|LoCoOp_{GL}|21.67|95.69|22.98|95.07|31.41|92.10|49.79|87.85|31.46|92.68||\n|Ours_{GL}|9.69|97.80|26.27|94.22|34.78|91.33|36.63|91.65|26.84|93.75|**4.62%**|\n|Ours_{R-MCM}(w/o LNP)|8.91|97.88|23.78|94.98|32.94|92.02|35.76|91.79|25.34|94.16|**6.12%**|\n|Ours_{R-MCM}(w/ LNP)|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|**6.94%**|\n\nThe first two lines indeed showcase the benefits from our local prompts with enhanced local information **(4.62% on FPR95)**. Furthermore, equipped with the two kinds of OOD metrics proposed in our paper, we are able to further boost the performance **(2.32% on FPR95)**. 
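As a concrete illustration of the two regional scores defined in the response above, here is a minimal NumPy sketch. The helper names, the temperature `T`, and the top-k size `k` are hypothetical placeholders, not the paper's actual settings; the point is the mechanism: adding negative local prompts to the softmax denominator can only lower each region's ID probability, which is what lets the w/ LNP variant suppress outlier-like regions.

```python
import numpy as np

def _cos(a, B):
    """Cosine similarity between a vector and each row of a matrix."""
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

def regional_score(local_feats, id_prompts, neg_prompts=None, T=0.07, k=2):
    """Top-k mean over local regions of the max ID-class softmax probability.

    With neg_prompts (the 'w/ LNP' variant), negative-prompt similarities
    join the softmax denominator, shrinking the ID probability of regions
    that resemble the learned outlier prompts.
    """
    per_region = []
    for z in local_feats:
        logits = _cos(z, id_prompts) / T
        if neg_prompts is not None:
            logits = np.concatenate([logits, _cos(z, neg_prompts) / T])
        p = np.exp(logits - logits.max())  # numerically stable softmax
        p /= p.sum()
        per_region.append(p[: len(id_prompts)].max())
    return float(np.mean(sorted(per_region, reverse=True)[:k]))
```

The full R-MCM score would then add the global term, i.e. `S_MCM(x) + regional_score(...)`, matching the formulas above.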
Even without local negative prompts (serving as an ablation), the local prompts are already capable of improving OOD performance with fine local information. Therefore, we would say that the core contribution lies not only in the improvements from utilizing LNP but also in the local prompts with enhanced local information. We sincerely hope that the reviewer could gain a better understanding of our core contribution from the comparison table above.\", \"a3\": \"We would like to make a clarification of the new visualization results. These samples are globally classified into an ID category with high confidence and therefore fail to be detected as OOD samples by previous global-based methods.\n\nWe cannot fully understand the meaning of the reviewer. What does the reviewer mean by \\\"samples that are correctly identified, but cannot be determined to be OOD\\\"? Does it mean 1) the samples are correctly classified into ID categories globally, or 2) correctly classified between ID and OOD? If the reviewer means the latter, there would be no doubt about \\\"but cannot be determined to be OOD\\\", so we assume the reviewer means the former. We would explain that OOD samples cannot be classified correctly, as there are no OOD categories in advance. **The only difference is whether a sample is assigned to an ID category with high confidence.** If it is, then it is exactly one of the samples in the new visualization results, i.e., global-based methods fail in some cases that are globally similar to certain ID categories. By contrast, our method successfully discriminates these circumstances with local information and treats them as OOD. Once a certain example is classified correctly, it means the sample is indeed an ID sample, and **it is determined not to be OOD**. We select three cases, and all of them are outlier samples. They all **cannot be classified correctly, as no ID categories match their categories**. 
In other words, none of them can be correctly identified, whether globally or locally. However, when identified globally, they are incorrectly assigned to ID classes that they resemble overall, and are no longer considered OOD. By contrast, our method with local enhancement successfully addresses the issue.\n\nWe are uncertain whether we have understood the reviewer correctly. If our understanding differs from that of the reviewer, please feel free to let us know and we will make an instant explanation accordingly.\"}", "{\"title\": \"Additional Response to Reviewer GNZR\", \"comment\": \"We sincerely thank the reviewer for taking the time to view the rebuttal and making valuable suggestions to the paper. As for the two additional questions, we would like to make some further clarifications:\n\n#### **1. Adding more shots**\nAs can be seen from the main table and the explanation in the first-round discussion, we would like to emphasize two key points: **1)** the phenomenon is commonly observed in various OOD detection methods, including but not limited to ID-like and LoCoOp, as we have adequately shown in the first-round discussion; **2)** the phenomenon should be attributed to the characteristics of the two datasets.\n\nTo further support our point of view, we calculate the similarity of the prompts to both the OOD datasets and the ID dataset with different shots and observe the trends of change:\n\n\n|Average similarity|4-shot|16-shot|\n|-|-|-|\n|ID-dataset|0.297|0.304|\n|iNaturalist|0.264|0.255|\n|SUN|0.282|0.281|\n|Places|0.285|0.283|\n|Texture|0.289|0.283|\n\nIt can be seen from the table that **1)** the similarity of the four OOD datasets is related to the OOD detection performance on them. 
Specifically, higher similarity to the ID dataset (as with Texture) leads to poorer performance, which is in line with the analysis and experimental results; **2)** as the number of shots increases, the change in discrimination for SUN and Places (decreases of 0.001 and 0.002) is less clear than for iNaturalist and Texture (decreases of 0.009 and 0.006, respectively). This partially proves that the gain brought by more shots is smaller for the two middle datasets. It is a characteristic of the two datasets, and is **irrelevant to the algorithm used for OOD detection**.\n\n\nFrom the analysis above, we would like to explain that **ours is not the only method that encounters the phenomenon**, and we **give a possible explanation from our point of view** to show that the phenomenon exists independently of the bias brought by the few-shot setting, which indicates the rationality of the phenomenon. As it is not the focus of our paper, **the effectiveness of our proposed method, which achieves consistent and substantial improvements, should not be ignored**. The reviewer raises an interesting point and we will take the investigation of the balance between different kinds of datasets as a future research direction.\n\n\nIf the reviewer still has concerns about the phenomenon, please point out in detail which part of the explanation above raises questions, and we will be pleased to make a further explanation.\n\n#### **2. Improvement from negative local prompt**\", \"we_would_like_to_emphasize_that_negative_local_prompt_is_one_core_contribution_of_the_proposed_method_and_our_contributions_can_be_concluded_as\": \"**1)** utilization of local prompts (enhancing local information), **2)** local negative prompts (providing outlier knowledge) and **3)** local-based OOD metrics. 
We give a detailed comparison of the proposed method with LoCoOp in a progressive manner to demonstrate the effectiveness of each component.\n\n|method|iNaturalist||SUN||Places||Texture||Average||$\\Delta$|\n|-|-|-|-|-|-|-|-|-|-|-|-|\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC||\n|LoCoOp_{GL}|21.67|95.69|22.98|95.07|31.41|92.10|49.79|87.85|31.46|92.68||\n|Ours_{GL}|9.69|97.80|26.27|94.22|34.78|91.33|36.63|91.65|26.84|93.75|**4.62%**|\n|Ours_{R-MCM}(w/o LNP)|8.91|97.88|23.78|94.98|32.94|92.02|35.76|91.79|25.34|94.16|**6.12%**|\n|Ours_{R-MCM}(w/ LNP)|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|**6.94%**|\n\n\n\nWe summarize from the table that **1)** compared with LoCoOp (with the same OOD metric), we get a substantial improvement (**4.62%** FPR95), which firmly validates the effectiveness of local prompts; **2)** our method equipped with local and outlier enhancement further improves the performance by **2.32%** on FPR95. We shall say that all these improvements are made by our local enhancement framework and should not be ignored.\n\n\nLast but not least, please allow us to say that improvement on few-shot OOD detection is challenging. Given that we already substantially improve the performance by about 4.62% on FPR95 compared to LoCoOp with the same OOD metric, which is approximately the gain from moving from 4-shot to 16-shot, such **an improvement, built on several components proposed by us, i.e., ours_{GL} to ours_{w/ LNP}, should not be considered marginal**.\n\n\nWe really appreciate the reviewer's patience and valuable time in viewing the rebuttal, and for rating the paper positively. We hope the additional response can address the remaining questions from the reviewer. 
We are more than pleased to discuss any of these problems with the reviewer and to uncover observations that are worth paying attention to.\"}", "{\"summary\": \"This work proposes to enhance Out-of-Distribution (OOD) detection through local prompt learning. Specifically, the method involves randomly cropping training images and identifying hard negatives by comparing their similarity to global prompts, which are provided in textual format. These hard negatives, along with the associated object samples, are then used to train local negative prompts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written overall, but it lacks some crucial symbol definitions and implementation details.\n2. 
The method addresses a significant task aimed at detecting OOD images using only a few images from the ID data. Extensive experiments demonstrate the effectiveness of the method.\", \"weaknesses\": \"1. Some crucial symbol definitions, such as T_k and \\u200b\\\\hat{t}_i , are not clearly explained.\\n2. Apart from replacing the maximum process with an average in Equation 3, what distinguishes the proposed method from [Miyai et al. 2023b]?\\n3. Although hard negative samples can serve as a form of OOD data, they primarily consist of cluttered backgrounds or parts of in-distribution (ID) objects lacking distinctive features. Essentially, they are quite different from true OOD classes. The work lacks an explanation of the underlying theories.\", \"questions\": \"1. How are the local prompts and negative local prompts initialized, and what are their dimensions?\\n2. Moreover, from lines 270 and 237, it appears that global and local prompts t_k share the same format, \\\"a photo of {class}.\\\" However, Figure 2 indicates that local prompts are trainable, which seems contradictory. Could the authors provide further clarification? Are the local prompts in Equation 4 simply using the same symbol t_k but representing different meanings? If so, using distinct symbols might be more appropriate.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the authors\\u2019 response. The replies have addressed most of my concerns.\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear reviewer dbE1:\\n\\nWe respectfully appreciate again for your insightful and thoughtful comments! 
As the suggestions from the reviewer, we give thorough explanation and update the manuscripts accordingly.\\n\\nAs it is approaching the end of the discussion period (November 26 at 11:59 pm AoE) and we do not receive feedback, we sincerely hope you could look through our response and have a further comment at your convenience if you have any questions about the paper. We will do our best to address the issues of the reviewer.\\n\\n\\nBest wishes,\\n\\nSubmission 6694 Authors.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We are grateful to all reviewers for taking the time to review our work and highly recognize our contributions: \\\"idea is novel, shed light on an interesting perspective of local features in OOD detection\\\" (**dbE1**,**NiJR**,**GNZR**), \\\"the effectiveness of the method is extensively demonstrated\\\" (**KX2t**,**GNZR**), \\\"the paper is well-organized and easy to follow\\\" (**NiJR**,**KX2t**). We update the manuscript and highlight all the changes from the feedback of the reviewers with blue color. We will respond to each reviewer respectively and try our best to address the concerns of the reviewers.\"}", "{\"title\": \"Additional Response to Reviewer NiJR for Q4\", \"comment\": \"A4: We thank the reviewer's kind suggestion and make a clarification of the range in the updated manuscript. 
As for the influence of the loss term, we follow the suggestion from the reviewer and conduct experiments over a sufficiently wide range of $\\lambda_{neg}$ (**where some extreme cases are rarely considered in practical applications because they are too unbalanced**).\n\n|method|iNaturalist||SUN||Places||Texture||Average|\n|-|-|-|-|-|-|-|-|-|-|\n|$\\lambda_{neg}$|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|\n|0.1|17.64|95.97|27.18|94.76|34.45|91.41|48.82|86.89|32.02|\n|1|9.92|97.80|25.04|94.86|32.83|92.02|48.90|88.79|29.17|\n|10|10.19|97.83|23.46|95.26|31.59|92.31|49.57|88.81|28.70|\n|50|34.09|92.06|42.34|88.63|50.88|83.52|62.47|85.70|47.44|\n\n\nIt can be seen that the coefficient indeed works, with performance dropping under the extreme cases ($\\lambda_{neg}=0.1$ and $50$). For $\\lambda_{neg}=50$, the loss for local prompts is neglected, resulting in a severe performance drop. For $\\lambda_{neg}=0.1$, the local negative prompts are over-regularized by the diversity constraint. The effectiveness of the proposed loss can be further verified accordingly. We thank the reviewer for the reminder and revise the statement in the corresponding paragraph.\n\n\nAs for the comparison with ID-like, we provide a comprehensive analysis from the perspectives of motivation and empirical experiments, just as stated in the response to reviewer dbE1. We believe that the core contribution of our method **lies in the utilization of local information with outlier knowledge**. Despite employing random cropping, ID-like still approaches OOD detection from the perspective of global information. By contrast, all the modules and the regional loss function of our method are designed to meet the demand for incorporating local information. In fact, the **first visualization in A3 is a misclassification case of ID-like from its own paper**. It strongly demonstrates the effectiveness of our method in discriminating OOD samples with subtle differences in certain regions, which global-based methods cannot. 
Empirically, we show that our method achieves substantial **3.78% improvements regarding FPR**. Moreover, the unique extensibility of our method can further **boost the performance with the aid of any global-based method**, and again achieves **impressive 6.41% improvements on FPR95**.\\n\\nLast but not least, please allow us to explain that improving OOD performance is challenging, and we demonstrate empirically that promoting 2% of FPR95 performance approximately equals to promotion from 4-shot to 16-shot. Despite the challenges ahead, we still **get state-of-the-art results after comprehensive comparison with all existing methods**, strongly demonstrating the effectiveness of the entire pipeline of the proposed method. Moreover, our unique implementation from the perspective of local prompt enhancement equips us with the ability to further **integrate well-trained global prompts from all existing methods**, which means we are always able to enhance OOD detection in a way like \\\"standing on the shoulders of giants\\\". We believe that the local enhancement could serve as a direction that could be well explored to make a breakthrough in OOD detection.\\n\\n\\nWe are sincerely grateful that the reviewer takes the time to view the rebuttal, and we respond to each of the remaining issues respectively. We hope the additional explanation and experimental results can address the problems. If the reviewer still has concerns, let us know in detail and we will make a further explanation.\"}", "{\"comment\": \"I would like to thank the authors for their detailed responses. The responses addressed some of my concerns, so I would like to retain my original rating. I still have the following concerns.\\n\\n**W2**. In Table 2 newly presented in W2, the differences between Ours w/o LNP and Ours w/ LNP are all less than 1.0 point. 
For example, given that LoCoOp shows an average improvement of more than 2.0 points over its competitors based on its core technical idea, I feel that the improvements made by the proposed method are rather marginal. In addition, I would say Fig. 4 should be replaced by a table to quantify the differences between the methods.\\n\\n**W3**. I would like to thank the authors for providing new visualization results. However, at least two of the examples are due to clear misclassification of objects. I rather feel that it would be preferable if examples were those that are correctly identified even when viewed globally, but cannot be determined to be OOD unless viewed locally.\\n\\n**W4**. The authors' response is understandable. However, if so it should be clearly stated in the text that \\\"at least in the entire ablation process of our practical experiment.\\\" Actually the authors' response does not really address my point. The key of the proposed method is in the loss term. If the weight $\\\\lambda_{\\\\text{neg}}$ for the loss term does not change the final performance, it could negate the impact of the proposed method. 
To dispel this contradiction, the analysis should be performed for a sufficiently wide range of $\\\\lambda_{\\\\text{neg}}$ for which the contribution to performance becomes clear.\n\nLast but not least, reading the fellow reviewer's comments, I agree with Reviewer dbE1's comment that the novelty of the proposed method is somewhat diminished because it is similar to the existing methods such as ID-like that use randomly cropped regions to supervise positive and negative prompt learning.\"}", "{\"title\": \"Author Response to Reviewer KX2t for Q3-Q5\", \"comment\": \"#### **Q3: Explanation of hard negative samples.**\", \"a3\": \"Actually, what counts for OOD detection is that the space **is not ID regions**, so separating ID and OOD regions is of great significance in the method, and employing hard negative samples perfectly suits this requirement, as carefully selected OOD data may not contribute to separating ID and OOD regions in the scene. **Theoretically**, hard negative samples have a distribution more similar to ID samples than real outliers do. Once our model is trained on hard negative samples, it naturally gains the ability to discriminate outlier samples, since detecting hard negative samples (similar to ID samples) is harder than detecting real outliers. We emphasize that this goal is not easy to realize, as we carefully design local prompts to learn local information of ID categories, negative local prompts to simulate potential outliers, and a diversity regularization to make negative local prompts cover more unseen classes. All the analysis above strongly demonstrates the effectiveness of the proposed method.\n\nTo support our point of view,\n**1)** **Quantitatively**, we conduct experiments on the number of negative local prompts in Table 7 and find that the local negative prompts are sufficient to achieve good results, indicating that the local negative prompts represent the features of potential outliers well. 
We additionally carry out experiments to calculate the similarity of the ID dataset, hard negative samples, and OOD datasets with the local prompts, and list the quantitative results as follows:\n\n||Average similarity|\n|-|-|\n|w/o local enhancement||\n|ID-datasets|0.297|\n|hard negative samples|0.292|\n|OOD-datasets|0.280|\n|w/ local enhancement (ours)||\n|ID-datasets|0.312|\n|hard negative samples|0.281|\n|OOD-datasets|0.273|\n\nIt can be seen in the table that without local enhancement, the model fails to discriminate between the ID datasets and hard negative samples, while a large margin exists with the OOD datasets. After local enhancement, the difference between the ID datasets and hard negative samples clearly improves, and the hard negative samples reach a similarity close to that of the OOD datasets. The results demonstrate the effectiveness of hard negative samples and the rationality of hard negative samples serving as a form of outlier samples.\n\n**2)** **Qualitatively**, we visualize both local negative prompts in Fig. 5 and the attention regions of the local prompts in Fig. 9. It is vividly shown in these visualizations that the negative local prompt is discriminative about potential outliers in the scene (Fig. 5), and pushing away from these potential outliers (e.g., background) helps the model concentrate on ID regions (Fig. 9) and therefore improves the performance.\n\nWe sincerely hope our analysis could help the reviewer gain a better understanding of the strength of the proposed method and the rationality of hard negative samples serving as a form of outlier samples. If you still have questions, please point them out in detail and we will try our best to make things clearer.\n\n#### **Q4: Initialization of local prompts and negative local prompts.**\", \"a4\": \"The global prompts are hand-crafted prompts \\\"a photo of {class}\\\" (line 237) and are kept frozen during training and evaluation. 
The local prompts are initialized with embeddings of \\\"{learnable prefix}+{class}\\\" (line 264) and optimized during training. Negative local prompts are initialized randomly due to the lack of real outlier information, and the diversity regularization $\\mathcal{L}_{reg}$ guarantees that the negative local prompts cover more unseen spaces. An intuitive explanation can also be found in the diagram in Fig. 2. Their dimensions are the same as the dimension of the text embeddings, which is 512 in our paper.\n\n#### **Q5: Further clarification of prompts.**\", \"a5\": \"We would like to explain that $t_c$, $t$ and $\\hat{t}$ stand for the features of the global, local and negative local prompts in our paper (defined in lines 237, 264 and 265, respectively). As explained in Q4, the global prompts are hand-crafted prompts in the form of \\\"a photo of {class}\\\" and are kept frozen. In practical application, the local prompts are initialized with embeddings of \\\"{learnable prefix}+{class}\\\", and the \\\"learnable prefix\\\" is learned by end-to-end optimization with the weighted sum of the loss functions. The local prompts in Eqn. 4 share the same meaning as the symbol $t_k$ (with the subscript k running from 1 to the number of categories). An intuitive explanation can also be found in the diagram in Fig. 2. We thank the reviewer for the kind reminder and give a clear explanation in the updated manuscript. If the reviewer has any further questions about the notation, please point them out and we will provide a clear and detailed explanation.\n\n[1] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning. NeurIPS 2023.\n\n[2] Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models.\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear reviewer KX2t:\n\nWe respectfully thank you again for your insightful and thoughtful comments! 
Following the reviewer's suggestions, we have given a thorough explanation and updated the manuscript accordingly.\n\nAs the end of the discussion period (November 26 at 11:59 pm AoE) is approaching and we have not yet received feedback, we sincerely hope you could look through our response and leave a further comment at your convenience if you have any questions about the paper. We will do our best to address the reviewer's issues.\n\nBest wishes,\n\nSubmission 6694 Authors.\"}", "{\"title\": \"Author Response to Reviewer dbE1 for Q2\", \"comment\": \"#### **Q2: Results using Regional OOD score.**\", \"a2\": \"Following the suggestion of the reviewer, we additionally conduct an experiment in which the OOD score depends solely on the Regional OOD score. Results are shown in the table below.\n\n|method|iNaturalist||SUN||Places||Texture||\n|-|-|-|-|-|-|-|-|-|\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|\n|MCM|30.91|94.61|37.59|92.57|44.69|89.77|57.77|86.11|\n|Ours (Regional)|39.17|88.04|67.06|82.47|60.13|79.61|80.96|67.35|\n|Ours (Global)|27.70|94.66|31.30|93.68|39.50|90.86|38.61|91.79|\n|Ours (Global+Regional)|**9.65**|**97.87**|**20.40**|**95.57**|**29.39**|**92.67**|**51.20**|**88.00**|\n\nFrom the table, it can be concluded that **1)** the Regional OOD score alone cannot achieve satisfying results in our experiment and must rely on the global score; **2)** the proposed R-MCM achieves the best results with respect to all metrics. This is in line with the analysis that the global prompt/score provides a **global and fundamental** discrimination between ID and normal OOD samples (Sec. 4.1), while the local prompts/score provide a **fine** discrimination between ID and hard OOD samples (Sec. 4.2). Consequently, it demonstrates that our proposed score integrating global and local information outperforms the strategy that relies solely on the Regional OOD score. 
We kindly highlight that we aim to propose a novel approach to enhance OOD detection performance from the perspective of enhancing local information, not to demonstrate that local prompts are better or more important than global prompts and can replace them.\n\n\nWe would appreciate it if our explanation could address the reviewer's concerns. If you have any other questions about the method or the experiments, reply to us and we will try our best to address your confusion.\n\n\n[1] ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection. CVPR 2024.\n\n[2] Negative Label Guided OOD Detection with Pretrained Vision-Language Models. ICLR 2024.\n\n[3] Deep Anomaly Detection with Outlier Exposure. ICLR 2019.\n\n[4] Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models.\"}", "{\"title\": \"Final Rating\", \"comment\": \"I think the authors addressed most of my concerns and I would like to stick to my previous rating of marginally above acceptance. It has some interesting observations.\", \"couple_of_comments\": [\"I don't think I understand authors' point of adding more shots causing more confusion for OOD detection, because intuitively having more shots should make the model more confident about in distribution data.\", \"The improvement from negative local prompt seems to be marginal, despite being a key selling point of the paper. Is there any intuition why?\"]}", "{\"title\": \"Author Response to Reviewer NiJR for Q1-Q3\", \"comment\": \"We appreciate the reviewer for the valuable suggestions on our work. We reply to them sequentially and will incorporate them in the revised version.\n#### **Q1: Standard deviation of metrics.**\", \"a1\": \"All of the results are averaged over three runs, and we do not observe obvious fluctuations, as shown by the standard deviations in the table below. It shows that the improvements are stable and are not attributable to randomness. We speculate the reason is that the background information of the images is rich enough to capture local outlier knowledge and thus achieve stable results, which is validated by the visualization in Fig. 4. Moreover, we would like to emphasize that few-shot OOD detection is challenging and the improvement is not small, as shown in the table in Q2, where our method obtains 22.04% (FPR95) and 2.0% (AUROC) relative gains compared with LoCoOp.\n\n\n|method|iNaturalist||SUN||Places||Texture||\n|-|-|-|-|-|-|-|-|-|\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|\n|Ours (4 shot)|12.81\\u00b10.38|97.29\\u00b10.18|19.34\\u00b10.36|95.85\\u00b10.17|27.53\\u00b10.51|92.97\\u00b10.37|45.51\\u00b10.75|89.99\\u00b10.36|\n|Ours (16 shot)|8.63\\u00b10.29|98.07\\u00b10.05|23.23\\u00b10.24|95.12\\u00b10.08|31.74\\u00b10.34|92.42\\u00b10.16|34.50\\u00b10.62|92.29\\u00b10.24|\n\n\n#### **Q2: Accuracy differences between w/ LNP and w/o LNP.**\", \"a2\": \"We show the detailed results of w/ LNP and w/o LNP for 16 shots in the table below. We would like to highlight that, although the benefit margin gradually decreases, the improvement is actually not small. We take the improvements from 4-shot to 16-shot as a reference. Compared to w/o LNP, ours with LNP achieves gains of **0.82** (**2.62%** relatively) and **0.32** (**0.4%** relatively) with respect to FPR and AUROC, respectively. By contrast, the improvements from 4-shot to 16-shot are 1.77 and 0.45, respectively. 
From the results, it can be seen that the improvement is not small, therefore demonstrating the effectiveness of the proposed LNP OOD metric.\n\n\n|method|iNaturalist||SUN||Places||Texture||Average||$\\Delta$||\n|-|-|-|-|-|-|-|-|-|-|-|-|-|\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|\n|LoCoOp|21.67|95.69|22.98|95.07|31.41|92.10|49.79|87.85|31.46|92.68|-|-|\n|Ours (w/o LNP)|8.91|97.88|23.78|94.98|32.94|92.02|35.76|91.79|25.34|94.16|19.42%|1.6%|\n|Ours (w/ LNP)|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|**22.04%**|**2.0%**|\n|Ours (4 shot)|12.81|97.29|19.34|95.85|27.53|92.97|45.51|89.99|26.29|94.03|||\n\n#### **Q3: Examples of OOD images.**\", \"a3\": \"We appreciate the reviewer's detailed and thorough understanding of the proposed method. As the reviewer suggests, we show several examples where global-based methods fail to detect hard OOD samples, and add them to the updated manuscript (**line 525 and Fig. 6**). For example, in the first image, the previous global-based method fails to detect the outlier because it is similar to the ID category ocean liner as a whole, with only the subtle difference of the mechanical devices on the deck indicating that it is actually an icebreaker (also suggested by the iceberg next to it). The same phenomenon is also observed in the other examples (the shape of the sunflower center in the second image and the calyx of the apple in the third image). These examples vividly illustrate that global-based methods focus on overall representations and fail on hard outliers with subtle differences in certain regions. By contrast, our method enhances local information and successfully solves the problem with outlier knowledge, qualitatively showcasing the superiority and effectiveness of our method.\"}", "{\"metareview\": \"This work aims to enhance few-shot tuning for OOD detection by complementing global prompts with regional enhancements through local prompts. 
To achieve this, the authors introduce two key modules: (1) negative augmentation to leverage local outlier knowledge, and (2) local prompt-enhanced region regularization to effectively capture local information. The extensive results have demonstrated the effectiveness of the proposed approach. Several reviewers acknowledged the novelty of the proposed method (dbE1, NiJR, GNZR), its effectiveness in improving OOD detection (KX2t, GNZR), and the clarity of the paper\\u2019s organization, which facilitates understanding. However, reviewers raised a number of concerns, including the need for additional comparisons with similar methods (e.g., ID-like, NegLabel) (dbE1, KX2t), further explanations and justifications regarding the proposed approach (KX2t, GNZR), and more ablation studies (e.g., w/ or w/o LNP) (dbE1, NiJR, GNZR). Additional concerns were noted regarding hyperparameter influence analysis (i.e., $\\\\lambda_{neg}$ and $\\\\lambda_{reg}$) (NiJR), limited performance gains (dbE1, NiJR), and performance degradation in few-shot scenarios (e.g., declining performance from 4-shot to 8-shot or higher) (NiJR, GNZR). The authors provided detailed responses and effectively addressed most of these concerns. As a result, all reviewers ultimately gave positive ratings, leading to an average score of 6.0. We decide to accept the paper. Additionally, the authors are encouraged to further refine the manuscript in accordance with the reviewers' suggestions to strengthen the final version.\", \"additional_comments_on_reviewer_discussion\": \"During the discussions, the authors effectively addressed the concerns raised by reviewers NiJR and KX2t, leading both to increase their scores. Reviewer GNZR maintained some reservations regarding performance degradation with additional shots and the limited performance gains observed. The authors provided follow-up responses to further clarify and mitigate these concerns. Reviewer dbE1 did not participate in the discussion phase. 
Overall, all reviewers ultimately gave positive ratings, and the authors\\u2019 detailed and thorough responses successfully addressed the majority of the feedback. Based on these factors, I agree that the proposed method demonstrates sufficient merit for publication.\"}", "{\"summary\": \"The authors propose an approach to Out-of-Distribution (OOD) detection in Vision Language Models through few-shot fine-tuning. The paper uses random crops from the images to generate negative global prompts, which are used to guide the learning of learnable local prompts aimed at detecting local or regional outliers. They also propose a metric for quantifying the OOD scores that takes into account local or regional information.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I think the paper sheds light on an interesting perspective regarding the role of local or regional features in OOD detection. The idea of randomly cropping the image and creating negative global samples based on image-text similarity scores, while simple, is both effective and clever. The authors have empirically demonstrated the effectiveness of learning local prompts, achieving strong performance.\", \"weaknesses\": \"I am unsure about the justification for using few-shot fine-tuning. I do not understand what the authors meant in lines 46-48. Having access to the full dataset for a class can provide a better estimate of the outliers. Identifying outliers with only a few examples, while challenging, may be biased toward the examples of the class being used, especially if the number of shots is low and one of the examples is an outlier. The reason I am focusing on this is that the results in Table 1 seem counterintuitive to me. 
It appears that OOD detection worsens with 16 shots on the SUN and Places datasets, where we would expect the model to identify OODs better with more examples.\\n\\nThere also seems to be a lack of ablation studies regarding the model's performance without the learnable negative local prompts. I believe this ablation is important.\\n\\nI have a comment about Figure 2. It mentions local features of augmented inputs, which I believe refers to the image patches from the randomly cropped images. However, from the figure, it appears that they are only taking patches from the entire image, which is confusing.\", \"questions\": \"I kindly request that the authors clarify the motivation behind few-shot tuning and address the discrepancy between few-shot and many-shot performance in Table 1, as highlighted in the weaknesses section. I believe an ablation study on the impact of additional negative local prompts is necessary to justify their use. Additionally, I would appreciate it if the authors could provide a clearer explanation of the image in Figure 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer GNZR for Q1-Q2\", \"comment\": \"We appreciate the reviewer for the valuable comments and appropriately respond to them as follows. We will add the suggested experiments and illustrations in the revised version.\\n\\n#### **Q1: Justification for using few-shot fine-tuning.**\", \"a1\": \"Few-shot OOD detection is proposed by LoCoOp[3]. We agree with their settings and introduce them briefly in the introduction. Here we provide the explanation in detail.\\n\\n\\\"On the one hand, the zero-shot methods do not require any training data, but they may encounter a domain gap with ID downstream data, which limits the performance of zero-shot methods[4]. 
On the other hand, fully supervised methods utilize the entire ID training data but may destroy the rich representations of CLIP by fine-tuning, which also limits the performance despite requiring enormous training costs. To overcome the limitations of fully supervised and zero-shot methods, it is essential to develop a few-shot OOD detection method that utilizes a few ID training images for OOD detection. By finding a balance between the zero-shot and fine-tuned methods, few-shot OOD detection has the potential to provide a more efficient and effective solution for OOD detection.\\\"\\n\\nThe effectiveness and efficiency have been demonstrated by previous works[3,5], and few-shot OOD detection has the potential to achieve competitive results against full-tuning methods[6,7].\\n\\nRegarding the reviewer's concern that the performance may be biased toward the examples of the class being used: in the paper, we conduct standard few-shot selection following existing methods in both representation learning[1,2] and OOD detection[3], and select few-shot samples using exactly the same procedure. Therefore, all comparisons are conducted under the same setting. As for the performance, we are not sure about the random seeds used by previous works. In our work, all of the results are averaged over three runs and we do not observe obvious fluctuations. \\n\\nWe hope the analysis could help the reviewer gain a better understanding of few-shot OOD detection. If you still have concerns or other questions, please respond to us and we will make every effort to address them.\\n\\n\\n#### **Q2: Few-shot and many-shot performance.**\", \"a2\": \"For OOD detection performance on the SUN and Places datasets, the phenomenon is common under the setting of few-shot OOD detection, as a **similar trend is observed** in ID-like[5] (1-shot and 4-shot in Places, shown in the table below) and LoCoOp[4] (4-shot and 16-shot in SUN and Places, shown in Table.1). 
Possible reasons for the phenomenon are: **1)** the ability of few-shot learning to effectively gain outlier knowledge and enhance OOD detection performance with scarce samples, demonstrating its utility; **2)** for few-shot OOD detection, more samples may not necessarily bring better performance, and incorporating more samples may confuse the discrimination, which may **be a characteristic of the two datasets**.\\n\\n|method|iNaturalist||SUN||Places||Texture||\\n|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC\\n|ID-like (1 shot)|14.57|97.35|44.02|91.08|41.74|91.15|26.77|94.38|\\n|ID-like (4 shot)|8.98|98.19|42.03|91.64|44.00|90.57|25.27|94.32|26.08|94.36|\"}", "{\"title\": \"Author Response to Reviewer dbE1 for Q1(2/2)\", \"comment\": \"**2) NegLabel**\\n\\nNegLabel[2] designs a novel scheme for the OOD score with negative labels. It leverages **real outlier information**, namely negative labels from extensive corpus databases, which we discuss in lines 38-41. This kind of knowledge helps to a great extent, as pointed out by OE[3], and is inconsistent with real-world applications, where negative categories are infinite. As we have **no access to any true outlier information**, the comparison is unfair. 
Moreover, while all of the mentioned works mainly focus on negative prompts, we highlight that we concentrate on the connection between global and local, and the negative prompt is only a small part of our method. By contrast, the advantage of our method is that it incorporates local information with pseudo local outliers and finds a novel way to take local information into both training and OOD score calculation. As the reviewer suggests, we provide a comparison and analysis below:\\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC\\n|NegLabel (w outlier)|1.91|99.49|20.53|95.49|35.59|91.64|43.56|90.22|25.40|94.21|\\n|Ours (w/o outlier)|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|\\n\\nWe observe that **1)** although it uses no outliers, our model achieves better average performance than NegLabel, demonstrating the effectiveness of our local information refinement strategy; **2)** NegLabel shows obvious overfitting to iNaturalist and SUN, which mainly consist of natural scenery. These datasets have data distributions different from images like Texture, which are detailed texture images. Consequently, our method achieves a better balance between different kinds of OOD datasets, which strongly validates the effectiveness of incorporating local information and strengthens its applicability to diverse real-world scenarios.\\n\\nWe include these two relevant works in the revised version (lines 426-447). We hope the comparison and analysis help the reviewer gain a better understanding of the unique contributions and advancements of the proposed method.\\n\\n\\n
Moreover, our method **can well adapt to all global prompt-based methods (see the experiments in Table.1 and above)** and integrating with them achieves better performance, which strongly demonstrates the potential and generalization ability. Despite the similarity on the surface, the starting point is essentially different. We hope the analysis above can help the reviewer have a better understanding of the strengths and unique contributions of our paper. We are grateful that the reviewer points out the related work and we update in the revised manuscripts.\"}", "{\"title\": \"Author Response to Reviewer GNZR for Q3-Q4\", \"comment\": \"#### **Q3: Performance without learnable negative local prompts.**\", \"a3\": \"As the negative local prompts represent potential outlier knowledge, they are randomly initialized since we have no access to outlier samples during training, therefore it can not be replaced with hand-crafted prompts in the method. So we assume the reviewer hopes to remove the negative local prompts along with hard-negative samples (which are used to optimize the negative local prompts). The results are shown in Fig.4 termed as w/o LNP. We showcase the detailed results as follows:\\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUORC|FPR|AUORC|FPR|AUORC|FPR|AUORC|FPR|AUORC|FPR|AUORC|\\n|Ours (w/o LNP)|8.91|97.88|23.78|94.98|32.94|92.02|35.76|91.79|25.34|94.16|\\n|Ours (w LNP)|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|\\n\\nIt can be seen that with negative local prompts, the model gets consistent and substantial improvements, showcasing the effectiveness of negative local prompts for enhancing local outlier knowledge with respect to few-shot OOD detection.\\n\\nAs for the qualitative effectiveness of the negative local prompts, we provide additional visualization in Fig.5 to demonstrate that they actually learn outlier knowledge in different scenarios. 
\\n\\n#### **Q4: Question about Figure.2 .**\", \"a4\": \"We sincerely appreciate your raising this point. First, we clarify that augmented inputs indeed **refer to the image patches from the randomly cropped images (marked with yellow in the top diagram of Fig.2)**, which is in line with your understanding. In the bottom diagram of Fig.2, we take one cropped image as an example, and the orange patches are local features from the same augmented image. For loss calculation and evaluation, the most related regions (patches after regional selection in the diagram) are selected for enhancing OOD detection performance. We are grateful for the reviewer's rigorous reading, and hope the explanation could address your concerns.\\n\\nWe are truly grateful for the reviewer's interest in our methods with detailed reviewing, and hope the explanation could address your concerns. If you have any further questions, reply to us at your convenience and we will try our best to address your concern.\\n\\n\\n\\n[1] Learning to Prompt for Vision-Language Models. IJCV 2022.\\n\\n[2] Conditional Prompt Learning for Vision-Language Models. CVPR 2022.\\n\\n[3] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning. NeurIPS 2023.\\n\\n[4] Delving into out-of-distribution detection with vision-language representations. NeurIPS 2022.\\n\\n[5] ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection. CVPR 2024.\\n\\n[6] Energy-based Out-of-distribution Detection. NeurIPS 2020.\\n\\n[7] Non-Parametric Outlier Synthesis. ICLR 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear reviewer NiJR:\\n\\nWe would like to express our gratitude for your kind response and make further explanation about your concerns. 
We will explain them sequentially.\\n\\nWith respect to W3, we would like to emphasize that failure on samples with similar global semantics is just **one of the reasons for wrong OOD detection results, and other types of wrong OOD detection should not be ignored**. We understand the reviewer's point that the OOD detection fails mainly because ocean liner and icebreaker both belong to \"ship\" in some way (a kind reminder that ocean liner is actually a wrong result, because the icebreaker cannot serve as a passenger ship). In other words, previous methods fail on **this kind of outlier** because they share similar global meaning. \\n\\nHowever, **it is not the only way that previous global-based OOD detection methods fail**. Failures also come from misclassification where ID and OOD samples have only subtle differences in certain regions. This kind of wrong detection is also challenging and by no means easier than the first type. For example, in the third example, the ID (Rose hip) is almost the same as the OOD (Apple) and is **hard to discriminate even from the perspective of local information**. It should not be dismissed that discriminating them is as hard as in the first type of example, and that ID and OOD samples **do not necessarily belong to a similar parent class**. Here we just want to cover more types of sources that cause wrong OOD detection results. We thank the reviewer for the suggestion and will provide more visualizations in the updated manuscripts.\\n\\nWith the help of visualizations, we also would like to further illustrate the uniqueness of our method. The first example, which the reviewer agrees is persuasive, **comes directly from the visualization of the original paper** (we are unable to provide an external link, but we sincerely invite the reviewer to look at the third example of Fig.7 on page 8). 
We believe it firmly demonstrates that the best global-based OOD detection method can only concentrate on outlier samples with overall differences and fails on hard samples that only have subtle differences in certain regions, which is the core issue that we aim to address.\\n\\nTherefore, we would like to express that the **core motivation and contribution of our method lies in the utilization of refined local information**, together with the training pipeline and the design of loss functions to achieve this goal, which has not been explored before. The motivations and starting points of the two methods are totally different, and the so-called \"negative samples\" are just a means to generate pseudo samples. Empirically, we adequately demonstrate that our local-based method achieves competitive results and further gains substantial improvements. We sincerely hope that our explanation could give the reviewer a better understanding that **1) outlier information from a local perspective and 2) extensibility to integrate global prompts for further improvements** are the core contributions of our paper. We will add more explanation of the comparison with the method in the updated manuscript.\"}", "{\"title\": \"Author Response to Reviewer dbE1 for Q1(1/2)\", \"comment\": \"We thank the reviewer for the constructive comments and respond to them as follows.\\n\\n#### **Q1: Analysis of related works.**\", \"a1\": \"As the reviewer suggests, we compare with the related works ID-like prompt and NegPrompt and show the detailed analysis for each.\\n\\n**1) ID-like** \\n\\nWe conduct a thorough comparison with ID-like[1]. First, **from the perspective of motivation**, our method differs from ID-like in that ID-like still focuses on global prompt optimization, with random crops generating ID-like samples. As it treats the whole image overall, it cannot solve the problem of hard OOD samples with subtle differences in certain regions, as described in lines 52-53. 
By contrast, our method optimizes local prompts and can well integrate with ID-like to further enhance the performance. **Empirically**, we compare OOD detection performance and showcase the results in the table below. \\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC\\n|ID-like (global prompt optimized)|8.98|98.19|42.03|91.64|44.00|90.57|25.27|94.32|30.07|93.68\\n|Ours (local prompt optimized)|12.81|97.29|19.34|95.85|27.53|92.97|45.51|89.99|**26.29**|**94.03**|\\n|Ours+ID-like|8.19|98.58|19.77|95.01|28.92|93.01|37.76|91.57|**23.66**|**94.54**|\", \"it_can_be_concluded_that\": \"**1)** our model achieves better average performance than ID-like, demonstrating the utility of our local information enhancement strategy; **2)** ID-like performs noticeably poorly on SUN and Places, which mainly consist of scenery with diverse scenarios; **3)** integrating the global prompt into our model further enhances the performance, as shown in the last row. We attribute this to the advantage that it brings well-trained global prompts that are better tailored for OOD detection than the hand-crafted template \"a photo of {class}\". It helps our method distinguish outlier samples with overall-dominant features, further showcasing the unique advantage of local prompts and the potential of local prompt optimization as an orthogonal direction for OOD detection.\\n\\nWe additionally showcase examples that are not detected by the previous global-based method in Fig.6. By contrast, the subtle differences can be discriminated by our method, which further demonstrates its effectiveness.\"}", "{\"comment\": \"Dear reviewer dbE1:\\n\\nWe sincerely thank you again for taking the time to provide valuable suggestions for our work! 
Following the reviewer's suggestions, we give a thorough comparison with the mentioned papers to demonstrate the strength of the paper both theoretically and empirically, and update the manuscript accordingly (Sec.5.1). We additionally carry out experiments to validate the effectiveness of the proposed OOD metric (shown in the first-round discussion).\\n\\nAs it is approaching the end of the discussion period and we have not received feedback, we are not sure if our responses address the concerns of the reviewer. We sincerely hope you could look through our response. We would be grateful if you could leave a further comment at your convenience. If you have any questions about the paper, we will do our best to address them.\\n\\nBest wishes,\\n\\nSubmission 6694 Authors\"}", "{\"comment\": \"We thank reviewer KX2t for the time and effort in reviewing our rebuttal, and we sincerely appreciate the reviewer's thorough and detailed review of our paper.\\n\\nWe are delighted that our paper is highly recognized by the reviewer and that our response addresses the reviewer's concerns and has led the reviewer to raise the rating of the paper! We are more than happy to answer any concerns or questions the reviewer might still hold during the discussion period. Please do not hesitate to let us know!\"}", "{\"summary\": \"The proposed work focuses on enhancing the model's ability to detect local features by training local feature prompts using localized image information. 
Furthermore, based on these local feature prompts, this paper introduces a new OOD score that integrates with the MCM Score.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tLocal image information is utilized to train local feature prompts, enhancing the model's ability to detect local features.\\n\\n2.\\tBased on local feature prompts, an OOD metric combining MCM has been proposed, along with a new ID classification metric.\", \"weaknesses\": \"1.\\tIn the latest works related to OOD detection, there are several studies similar to the method in this paper, such as \"ID-like (ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection)\" and \"NegLabel (NEGATIVE LABEL GUIDED OOD DETECTION WITH PRETRAINED VISION-LANGUAGE MODELS).\" The structure of this paper is similar to that of ID-like, as it also employs random crop combined with CLIP to construct ID and OOD data, and utilizes positive prompts and negative prompts. However, the paper lacks a comparison and analysis of strengths and weaknesses with the ID-like paper. It is recommended that the authors add relevant discussions to enhance the depth and breadth of the paper.\\n\\n2.\\tThe OOD Score strategy proposed in this paper presents results for SMCM and SR-MCM but lacks results that only use the Regional OOD score and corresponding ablation experiments. This makes it difficult to effectively demonstrate that the SR-MCM outperforms the strategy that solely relies on the Regional OOD score. 
It is suggested that the authors include this part of the experiment to strengthen the paper's persuasiveness and the comparability of the results.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer KX2t for Q1-Q2\", \"comment\": \"We appreciate the reviewer for the valuable suggestions and respond to them as follows. We will add the suggested experiments and explanations in the revised version.\\n\\n#### **Q1: Definitions of crucial symbols.**\", \"a1\": \"We thank the reviewer for the kind reminder: $\\\\mathcal{T_k}(x)$ is the sum of the $\\\\textit{k}$ largest elements in $x$ (line 184). In our paper, the length of $x$ is the number of local tokens/features. $\\\\hat{t_i}$ is the notation for the local negative prompts (line 266), with subscript $i$ ranging from 1 to the number of negative local prompts $N_{neg}$ in Eqn.6 and Eqn.7. We give a detailed explanation of the relevant notations in the updated manuscripts. If the reviewer still has doubts about any of the symbol definitions, we are pleased to address your questions.\\n\\n#### **Q2: The difference between the proposed method and GL-MCM[Miyai et al. 2023b].**\", \"a2\": \"We would like to emphasize the difference between our method and GL-MCM[Miyai et al. 2023b]. First, regarding motivation, GL-MCM focuses on **ID detection**, which is a totally different setting (ours, by contrast, is OOD detection). Consequently, it merely needs to detect **any of the regions that are similar to ID categories**, so taking the maximum is reasonable. However, the case is totally different in the OOD detection setting, which has to detect any possible outlier regions. By contrast, the motivation of our method is to **detect hard OOD regions**. Moreover, we kindly remind that the proposed score is only a small part of our method. 
Concretely, we propose local prompts and negative local prompts to enhance local outlier knowledge. We focus on hard OOD regions with generated hard OOD samples, and optimize the corresponding prompts with the proposed loss functions to obtain outlier information.\\n\\nTo further support the explanation, we conduct OOD detection using the GL-MCM score; the results are shown in the table below.\\n\\n|method|iNaturalist||SUN||Places||Texture||Average||\\n|-|-|-|-|-|-|-|-|-|-|-|\\n||FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC|FPR|AUROC\\n|GL-MCM|15.18|96.71|30.42|93.09|38.85|89.90|57.93|83.63|35.47|90.83|\\n|Ours (GL-MCM)|9.69|97.80|26.27|94.22|34.78|91.33|36.63|91.65|26.84|93.75|\\n|Ours|8.63|98.07|23.23|95.12|31.74|92.42|34.50|92.29|**24.52**|**94.48**|\\n\\nIt can be concluded that **1)** compared with GL-MCM, our method achieves a consistent and substantial improvement (10% and 3% on average, respectively), which is in line with the analysis above that GL-MCM fails to achieve satisfactory results in OOD detection; **2)** our R-MCM again improves over our GL-MCM variant, demonstrating that the proposed method is better suited to enhancing OOD detection with fine-grained outlier knowledge.\\n\\nWe compare the mentioned work in detail to show the unique strengths and advantages of our method both theoretically and experimentally. If the reviewer still has questions, please raise them and we will try our best to address them.\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear reviewer NiJR:\\n\\nWe sincerely thank you again for your insightful and thoughtful comments! Following the reviewer's suggestions, we give a thorough explanation and update the manuscripts accordingly.\\n\\nAs it is approaching the end of the discussion period (November 26 at 11:59 pm AoE) and we have not received feedback, we sincerely hope you could look through our response and leave a further comment at your convenience if you have any questions about the paper. 
We will do our best to address the issues of the reviewer.\\n\\nBest wishes,\\n\\nSubmission 6694 Authors.\"}" ] }
Ev4iw23gdI
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment
[ "Yifei Xing", "Xiangyuan Lan", "Ruiping Wang", "Dongmei Jiang", "Wenjun Huang", "Zheng Qingfang", "Yaowei Wang" ]
Mamba-based architectures have been shown to be a promising new direction for deep learning models owing to their competitive performance and sub-quadratic deployment speed. However, current Mamba multi-modal large language models (MLLM) are insufficient in extracting visual features, leading to imbalanced cross-modal alignment between visual and textual latents, negatively impacting performance on multi-modal tasks. In this work, we propose Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA), which enables the MLLM to extract fine-grained visual information. Specifically, we propose a pixel-wise alignment module to autoregressively optimize the learning and processing of spatial image-level features along with textual tokens, enabling structural alignment at the image level. In addition, to prevent the degradation of visual information during the cross-modal alignment process, we propose a multi-scale feature fusion (MFF) module to combine multi-scale visual features from intermediate layers, enabling hierarchical alignment at the feature level. Extensive experiments are conducted across a variety of multi-modal benchmarks. Our model shows lower latency than other Mamba-based MLLMs and is nearly four times faster than transformer-based MLLMs of similar scale during inference. Due to better cross-modal alignment, our model exhibits lower degrees of hallucination and enhanced sensitivity to visual details, which manifests in superior performance across diverse multi-modal benchmarks. Code provided at https://github.com/xingyifei2016/EMMA.
[ "Multimodal models", "State space models", "Efficient architectures", "Mamba", "Computational Efficiency", "Multimodal Alignment" ]
Accept (Poster)
https://openreview.net/pdf?id=Ev4iw23gdI
https://openreview.net/forum?id=Ev4iw23gdI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTCsJ6886F", "ymQcjGNqrS", "yFOuRlJFhK", "xuXv3wk3HY", "xc2AmlMTM4", "w8zxH2hVMp", "viSOUT2PDo", "v8dP9sQlro", "u2dl3BEUBQ", "twZ27Yyh65", "r2ic0U5lHh", "ocHwRSuShm", "n0yuooXTqL", "mpNvIKiuoq", "m7fLss5fN9", "lIcnPwL0kh", "ilnMo46GOP", "iYY44hMvZx", "hMeWaIXMu9", "feP9yiTDox", "fTgbe67t4U", "dnFwJYC9UQ", "d9jlPFPYjE", "d85iu4k5rx", "d38oWuCjJz", "bjWkBcv7wr", "af9wNL78Zf", "ZmDOkXM7zs", "Y5XeK4IpFd", "XfzwSUAcC6", "XWgTwlyq6W", "XWEeFBar4s", "WFveNOfYhq", "Vhvfjm4XNn", "TkDRQeB5yp", "TIFUWMAZ8h", "Rw0cfPgGcA", "Rqb2f26X2N", "R4aiMWlBYp", "Qdx8H1sYnR", "QRaxyV2o8B", "Pd4AMw71PJ", "OpekKdgnvK", "LfviKUxuTc", "IEhUnIJutJ", "I2CFwvGn9n", "G1yVJMEXn4", "Cz51PVz38O", "B0rbJVrsiC", "9lqAeaM1yA", "7b060sju2g", "7Q1yq4g3xY", "759oosXahb", "6MpxocVT6C", "3hcvYYAXme", "2YjP0N5SRp" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732206485371, 1730565403325, 1732211384844, 1737524038379, 
1733201520920, 1732456737480, 1730084776728, 1732208599523, 1732205300822, 1732207713947, 1732547398075, 1732209327208, 1732209070861, 1732539006687, 1732206590170, 1732209503878, 1732209536945, 1732207631063, 1732207677305, 1732599408335, 1732210592058, 1732455355851, 1732442112954, 1731590995254, 1732208945045, 1730681604998, 1732209149270, 1732206237411, 1730770174214, 1732207732952, 1732211549366, 1732455919980, 1732211577167, 1732538269719, 1732441973318, 1732442073835, 1732210419879, 1732543282902, 1732210652529, 1732211038451, 1732599487389, 1732442140157, 1732206770140, 1732211659626, 1732211703610, 1732455201989, 1732539374308, 1732457708471, 1732547509271, 1732209306700, 1732210669857, 1732210489212, 1732208327311, 1732541295029, 1732211678580, 1734495707522 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_T8Kc" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_T8Kc" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Area_Chair_5dM9" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_5nFQ" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yCEA" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Area_Chair_5dM9" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_yy1F" ], [ "ICLR.cc/2025/Conference/Submission10276/Reviewer_5nFQ" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Authors" ], [ "ICLR.cc/2025/Conference/Submission10276/Area_Chair_5dM9" ] ], 
"structured_content_str": [ "{\"title\": \"Q3. EMMA involves additional training steps, which should be explicitly detailed in the paper.\", \"comment\": \"**Q3. EMMA involves additional training steps, which should be explicitly detailed in the paper.**\\n\\nIn short, EMMA **does not involve additional training steps** besides a **singular finetuning stage** that jointly tunes the projector, LLM and decoder. \\n\\n**Training procedure of MLLMs.** We would like to first clarify the training procedure of vision-language models (VLMs). Here, we are specifically referring to models inspired by the LLaVA [14] architecture, which consists of a vision encoder, a multimodal projector and a backbone LLM, of which our work is based on. The training paradigm for LLaVA-like VLMs consists of two portions, pretraining and supervised finetuning. In the pretraining stage, all but the multimodal projector is frozen, where the projector is then trained on image-text pairs to learn the mapping from visual to token space. Then, in the supervised finetuning stage, only the vision encoder is frozen, while both the projector and backbone LLM are tuned, extending the LLM to multi-modal tasks. \\n\\n**Training procedure of EMMA.** Due to the effectiveness of the Mamba LLM, Cobra discovered that Mamba-based MLLMs do not benefit much from the pretraining stage. Thus, we follow the same procedure of Cobra and discard the pretraining stage altogether, and only conduct supervised finetuning for two epochs. For the additional structures introduced, we simply add the structural and hierarchical modules to our model, and jointly optimize with the original text generation objective. This simplifies the training of our methods, where only finetuning data is used to end-to-end finetune our entire model. 
We also do not require a separate stage nor additional data for the training of decoder or MFF module, unlike transformer-based EMU [15] and EMU2 [16].\"}", "{\"summary\": \"The paper introduces Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA), enhancing Mamba multi-modal large language models (MLLM) by improving their ability to extract fine-grained visual information. EMMA uses a pixel-wise alignment module for better structural alignment and a multi-scale feature fusion (MFF) module to maintain visual information during cross-modal alignment. Extensive experiments demonstrate that EMMA achieves lower latency and faster performance compared to other Mamba-based MLLMs, with superior cross-modal alignment and reduced hallucination. The code for this model will be made available.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Well Written: The paper is clearly articulated, making the complex concepts accessible and easy to understand for the reader.\\n\\n2. Motivation Well Clarified: The authors firstly identify the imbalance in the quality of visual and textual latent features in MLLM Mamba models. They propose a pixel-wise alignment approach to autoregressively enhance the learning and processing of structured visual features, leading to improved cross-modal alignment between visual and textual latents.\\n\\n3. Experiments Solid: The paper presents comprehensive experiments conducted on a variety of multi-modal benchmarks. These experiments include thorough comparisons with current state-of-the-art models based on both Mamba and transformer architectures, demonstrating the robustness of the proposed approach.\", \"weaknesses\": \"1. Why is there an imbalance in the quality of visual and textual latents in the Mamba LLM, where coarse visual features are ineffectively aligned with higher-quality textual features? How does this differ from transformer-based models? 
Do transformer-based models face the same issue?\\n\\n2. Could you provide a more objective, quantitative analysis of the loss of fine-grained visual cues in the LLM, as the current visualizations with just a few images seem insufficient? Is this phenomenon common across MLLMs, or is it specific to the use of the Mamba LLM?\\n\\n3. Why do the current results show that the Mamba LLM still lags behind the transformer architecture? In this case, does the speed advantage gained by using the Mamba architecture sufficiently compensate for the performance loss?\", \"questions\": \"1. Currently, models use ViT as the image encoder. Is it possible to build an MLLM entirely based on Mamba? What potential advantages and challenges might this approach entail?\\n\\n2. In transformer-based LLMs, is there a similar issue with pixel-level alignment? Would the method proposed in the paper also be applicable to transformer architectures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q2. Could you provide a more objective, quantitative analysis of the loss of fine-grained visual cues in the LLM, as the current visualizations with just a few images seem insufficient? Is this phenomenon common across MLLMs, or is it specific to the use of the Mamba LLM?\", \"comment\": \"**Q2. Could you provide a more objective, quantitative analysis of the loss of fine-grained visual cues in the LLM, as the current visualizations with just a few images seem insufficient? Is this phenomenon common across MLLMs, or is it specific to the use of the Mamba LLM?**\\n\\n**Motivation behind loss of fine-grained visual cues.** We discovered the problem of the loss of fine-grained visual cues when directly extracting and comparing intermediate features of the Cobra [13] model. 
Specifically, we find that visual features in deeper layers of the Mamba LLM exhibit increased portions of noise (manifested in unnatural specs in the image) and loss of structural integrity (manifested in the loss of topological structures) in the Cobra baseline model, which can be found in our visualizations in the main paper. Subsequently, we designed the hierarchical alignment loss to enhance the participation of intermediate visual features to alleviate the gradual loss of visual information. \\n\\n**Quantitative metrics to account for loss of fine-grained visual cues.** When composing this paper, we also attempted different ways to directly quantitatively visualize loss of fine-grained visual features, such as calculating the L2 distance or KL-divergence of intermediate features. However, these results do not reflect the actuality of loss of fine-grained visual cues, as different intermediate layers may in fact be responsible for different scales of visual intricacy, and simple metrics do not reflect how the model loses fine-grained information. We propose that, since we are dealing with intermediate features, it would require a model-specific metric to evaluate this loss of information, as different models would also possess different intermediate representations for the same sample input. Thus, it becomes extremely hard to visualize quantitatively the effect on fine-grained visual cues besides qualitatively displaying these features. We have also researched MLLM literature that tackles relevant subjects such as visual feature extraction [6, 14, 15, 16], visual feature fusion [17, 18] and did not find any suitable metrics. Consequently, we have resorted to hallucination benchmarks such as POPE and HallusionBench to evaluate the capability of models to effectively utilize fine-grained visual cues.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your response. My concerns are mostly addressed. 
I would like to retain the initial score.\"}", "{\"title\": \"Discussion on Q4 and Q5\", \"comment\": \"Q4: Thank you for expressing your concerns. The high-resolution setting can be referred to Mini-Gemini [1] and TokenPacker [2].\", \"q5\": \"Thanks for providing additional results. Authors are suggested to highlight the modification in different color, e.g., blue or red, in the revision.\\n\\n[1] Li Y, Zhang Y, Wang C, et al. Mini-gemini: Mining the potential of multi-modality vision language models[J]. arXiv preprint arXiv:2403.18814, 2024.\\n\\n[2] Li W, Yuan Y, Liu J, et al. Tokenpacker: Efficient visual projector for multimodal llm[J]. arXiv preprint arXiv:2407.02392, 2024.\\n\\n**Justification for my rating**\\n\\nThere are still some concerns remain. I hope to see the authors' responses to my questions. I will further consider raising my score if all my concerns are resolved. In the current stage, I am unable to give a higher score for this paper.\"}", "{\"summary\": \"Mamba-based architectures are promising for deep learning models but current Mamba multi-modal large language models (MLLM) are insufficient in extracting visual features. This paper proposes Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA). It includes a pixel-wise alignment module for structural alignment at the image level to extract fine-grained visual information autoregressively. Additionally, a multi-scale feature fusion (MFF) module is proposed for hierarchical alignment at the feature level to prevent the degradation of visual information. Extensive experiments on various multi-modal benchmarks show that the model has lower latency than other Mamba-based MLLMs and is nearly four times faster than transformer-based MLLMs of similar scale during inference.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The imbalance in the quality of visual and texture latent in the Mamba-based VLM does make sense to investigate and this paper proposes an effective method, named Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA) to solve this problem.\\n\\n2. The experiments are sufficient to verify the effectiveness of the proposed method.\\n\\n3. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. EMMA essentially enhances the visual representation ability of the model. Therefore, the comparison with peers should be conducted, e.g. the contrastive learning in CLIP and SigLIP. Besides, does the masked image construction loss achieve a function similar to EMMA?\\n\\n2. EMMA uses the visual features from the vision encoder as the target of the Pixel-wise Alignment Loss. However, visual features from the vision encoder may lose the fine-grained information. Does this problem affect the performance of EMMA?\\n\\n3. High-resolution image is an important direction for MLLM. Structural constraints on the high-resolution visual features are an interesting point to discuss. Authors are suggested to conduct the experiment under a high-resolution setting.\\n\\n4. Authors are suggested to display more qualitative results.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q1. Continued.\", \"comment\": \"**Pixel-wise alignment is unique to Mamba.** EMMA is unique in the sense that it is the first to consider multimodal alignment specifically in Mamba-based MLLMs. We first observe the inconsistent performance of Mamba models with respect to visual and textual inputs when scaled-up in parameters. 
In specific, Mamba models tend to experience a significant performance decline in processing visual data when scaled-up, despite them being able to seamlessly handle textual data in large-scaled settings. On the other hand, to deploy Mamba models in MLLM settings, where large-scaled Mamba models are necessary in processing both textual and visual information, it is crucial to preserve a balance in the quality of visual and textual features to achieve multi-modal alignment. Previous Mamba MLLM works such as VL Mamba and Cobra directly replaces the transformer LLM with the Mamba LLM model without addressing this imbalance of feature quality. Furthermore, as a preliminary study for this work, we have also tried transformer-based multimodal alignment approaches that enhance visual features or encourage multimodal alignment, such as visual-feature compression [9], masked-alignment [10, 11], and contrastive losses [12, 13], all of which fail in the Mamba LLM setting. Here, we show the respective model performances on the TextVQA dataset, which we find is a good indicator for overall model performance.\\n\\n\\n| Model | EMMA-V1 | EMMA-V2 | Cobra | Compression | Masked | Contrastive |\\n|--------------|---------|---------|-------|-------------|--------|-------------|\\n| TextVQA | 57.2 | 56.2 | 52.4 | 45.8 | 46.1 | 50.2 |\\n\\n\\n\\nAs shown, transformer-based approaches tailored for multimodal alignment in MLLM settings may in fact negatively impact model performance, which renders the necessity to devise Mamba-specific multi-modal alignment approaches. On the other hand, autoregressive pretraining on the visual features are shown to improve visual Mamba models [6], and for the first time allows visual Mamba models to scale up to large-sized models (600M). Given the effectiveness of autoregressive pretraining in promoting the single-modal visual Mamba model, we extend this approach to the multimodal LLM setting. 
Subsequently, we propose the pixel-wise alignment loss to encourage the generated autoregressive features to structurally align with the input image. Furthermore, when visualizing the LLM features, we notice a gradual decline of the intermediate visual features, causing the final generated visual feature to degrade. In response, we propose the multi-scale feature fusion module to tackle this issue, leading to better preservation of intermediate visual features, inducing better multimodal alignment of visual and textual features. In essence, our method specifically tackles the issue of multi-modal alignment in Mamba models, while the direct application of transformer-based multimodal methods have failed. As a result, EMMA demonstrates improved performances on a variety of metrics, and most importantly, significantly reduces the degree of visual hallucinations in the HallusionBench benchmark, surpassing transformer models with twice the size.\\n\\n**Pixel-wise alignment is not unique to Mamba?**\\nAs presented in our work, transformer-based models have also utilized similar methodologies [14, 15] in autoregressively generating visual features along with textual cues. Thus, the general idea of image-wise autoregressive loss is not unique to Mamba, in the sense that transformer-based methods also benefit from it. However, we still believe our implementation and execution of this idea is unique and meaningful in the setting of MLLMs, specifically in the sense of efficiency. While the aforementioned transformer methods achieve overall improvements in various MLLM benchmarks, they nevertheless introduce additional training stages that are specifically tailored for the visual decoder, which is used to generate the autoregressively generated visual images. Their decoders are also complex, usually consisting of stable-diffusion models that yield huge amounts of parameters. 
On the other hand, our Mamba-based visual decoder is extremely light-weight (around 200M parameters), and trained end-to-end together with the textual token loss. Thus, it requires no extra training data or training stages. Hence, we show an efficiently implemented Mamba-based decoder is sufficient to achieve image-wise self-supervision to the multimodal LLM.\"}", "{\"title\": \"Q1. Could the authors provide the computational complexity of the cross-attention operations compared to the overall method?\", \"comment\": \"**Q1. Could the authors provide the computational complexity of the cross-attention operations compared to the overall method?**\\n\\nTo summarize, cross-attention only introduces negligible amounts of parameters compared to the overall method, and thus do not contribute much to the overall complexity of our method. \\n\\n**Computation cost analysis.** We provide the following table for parameter counts of different components in our model. $Time_{Trn}$ and $Memory_{Trn}$ denote training-wise time and memory requirements, while $Time_{Eval}$ and $Memory_{Eval}$ denote evaluation-wise time and memory requirements. Decoder refers to the decoder for generating visual features for pixel-wise alignment, while MFF refers to the multi-scale feature fusion module for the hierarchical alignment. $CA$ refers to the sum of all additional cross-attention modules introduced, to separately account for the computational complexity of cross-attention operations. Note that these parameters are also accounted in MFF. Memory is in terms of MiB, and time is in terms of seconds. The memory of each component is calculated by $$memory_{post-module} - memory_{pre-module}$$ The time of each component is calculated by $$time_{post-module} - time_{pre-module}$$ The training results include additional loss calculation and optimization passes, which introduce additional time and memory usage. 
Note that the inference time reported here multiplied by 256 does not give the actual model inference time as presented in Table 4 (in original paper), this is due to the incorporation of cached operations during inference (which are common in all MLLM models, including transformers), which only necessitate one pass through the ViT and reduced tokens for the LLM during the text generation phase. Here we report actual processing times instead of cached times for the sake of performance measure, but note that cached times are more suitable and realistic for deployment. The following breakdown is performed uniformly with a batch size of 1, evaluated on a NVIDIA A100 GPU. \\n\\n| Components | Parameters | Time_{Trn} | Memory_{Trn} | Time_{Eval} | Memory_{Eval} |\\n|----------------------|------------|------------|--------------|-------------|---------------|\\n| Visual Encoder (ViT) | 731M | 2e-2 | 3,600 | 2e-2 | 3,600 |\\n| Projection MLP | 47M | 2e-3 | 498 | 4e-4 | 460 |\\n| MambaV1-2.8B LLM | 2.8B | 7e-1 | 25,266 | 7e-2 | 11,140 |\\n| MambaV2-2.7B LLM | 2.7B | 7e-1 | 26,210 | 6e-2 | 11,054 |\\n| Decoder | 174M | 5e-1 | 1,204 | N/A | N/A |\\n| MFF | 266M | 6e-3 | 1,346 | N/A | N/A |\\n| CA | 105M | 1e-3 | 892 | N/A | N/A |\\n\\nAs shown, cross-attention modules only account **for a very small proportion (roughly 3 percent) of the model's optimization parameters**. Additionally, **the cross-attention here is only quadratic in terms of the length of image patches and does not vary based on the length of text**, hence not truly quadratic in the strictest sense. Hence the memory and time usage of the cross attention module is extremely small. Also, since the decoder and MFF modules are not utilized during inference time, they pose no additional computation cost during testing.\"}", "{\"title\": \"References\", \"comment\": \"1. Hezheng Lin, Xing Cheng, Xiangyu Wu, and Dong Shen. Cat: Cross attention in vision transformer. 
In 2022 IEEE international conference on multimedia and expo (ICME), pp. 1\\u20136. IEEE, 2022.\\n2. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716\\u201323736, 2022.\\n3. Kaibing Chen, Dong Shen, Hanwen Zhong, Huasong Zhong, Kui Xia, Di Xu, Wei Yuan, Yifei Hu, Bin Wen, Tianke Zhang, et al. Evlm: An efficient vision-language model for visual understanding. arXiv preprint arXiv:2407.14177, 2024.\\n4. Zonghao Guo, Ruyi Xu, Yuan Yao, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, and Gao Huang. Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images. In European Conference on Computer Vision, pp. 390\\u2013406. Springer, 2025.\\n5. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.\\n6. Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.\\n7. Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, Shaked Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, et al. Jamba: A hybrid transformer-mamba language model. arXiv preprint arXiv:2403.19887, 2024.\\n8. Jamba Team, Barak Lenz, Alan Arazi, Amir Bergman, Avshalom Manevich, Barak Peleg, Ben Aviram, Chen Almagor, Clara Fridman, Dan Padnos, et al. Jamba-1.5: Hybrid transformer-mamba models at scale. arXiv preprint arXiv:2408.12570, 2024.\\n9. Paolo Glorioso, Quentin Anthony, Yury Tokpanov, James Whittington, Jonathan Pilault, Adam Ibrahim, and Beren Millidge. Zamba: A compact 7b ssm hybrid model. arXiv preprint arXiv:2405.16712, 2024.\\n10. 
Soham De, Samuel L Smith, Anushan Fernando, Aleksandar Botev, George Cristian-Muraru, Albert Gu, Ruba Haroun, Leonard Berrada, Yutian Chen, Srivatsan Srinivasan, et al. Griffin: Mixing gated linear recurrences with local attention for efficient language models. arXiv preprint arXiv:2402.19427, 2024.\\n11. Lucidrains. Linear attention transformer. 2021. URL https://github.com/lucidrains/linear-attention-transformer.\\n12. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.\\n13. A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML), 2020. URL https://arxiv.org/abs/2006.16236.\\n14. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024.\\n15. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Emu: Generative pretraining in multimodality. In The Twelfth International Conference on Learning Representations, 2023a.\\n16. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023b.\\n17. Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.\\n18. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.\\n19. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. 
arXiv preprint arXiv:2011.04006, 2020.\\n20. Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model. arXiv preprint arXiv:2401.10166, 2024.\\n21. Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024.\\n22. Sucheng Ren, Xianhang Li, Haoqin Tu, Feng Wang, Fangxun Shu, Lei Zhang, Jieru Mei, Linjie Yang, Peng Wang, Heng Wang, et al. Autoregressive pretraining with mamba in vision. arXiv preprint arXiv:2406.07537, 2024.\\n23. Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jie Qin, Jianke Zhu, and Lei Zhang. Tokenpacker: Efficient visual projector for multimodal llm. arXiv preprint arXiv:2407.02392, 2024.\"}", "{\"title\": \"Thank you again for you comments!\", \"comment\": \"We greatly appreciate your feedback and up-rating of our paper. We will incorporate the motivation and rationales in the camera-ready version of the paper if accepted. Thank you again for your valuable time and input!\"}", "{\"title\": \"References\", \"comment\": \"1. Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.\\n2. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.\\n3. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020.\\n4. Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model. arXiv preprint arXiv:2401.10166, 2024.\\n5. 
Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024.\\n6. Sucheng Ren, Xianhang Li, Haoqin Tu, Feng Wang, Fangxun Shu, Lei Zhang, Jieru Mei, Linjie Yang, Peng Wang, Heng Wang, et al. Autoregressive pretraining with mamba in vision. arXiv preprint arXiv:2406.07537, 2024.\\n7. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.\\n8. Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.\\n9. Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jie Qin, Jianke Zhu, and Lei Zhang. Tokenpacker: Efficient visual projector for multimodal llm. arXiv preprint arXiv:2407.02392, 2024.\\n10. David Mizrahi, Roman Bachmann, Oguzhan Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, and Amir Zamir. 4m: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36, 2024.\\n11. Yonatan Bitton, Gabriel Stanovsky, Michael Elhadad, and Roy Schwartz. Data efficient masked language modeling for vision and language. arXiv preprint arXiv:2109.02040, 2021.\\n12. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383, 2021.\\n13. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp.8748\\u20138763. PMLR, 2021\\n14. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. 
Emu: Generative pretraining in multimodality. In The Twelfth International Conference on Learning Representations, 2023a.\\n15. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023b.\\n16. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26296\\u201326306, 2024.\\n17. Junke Wang, Lingchen Meng, Zejia Weng, Bo He, Zuxuan Wu, and Yu-Gang Jiang. To see is to believe: Prompting gpt-4v for better visual instruction tuning. arXiv preprint arXiv:2311.07574, 2023.\\n18. Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5356\\u20135364, 2019.\\n19. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Mitigating hallucination in large multi-modal models via robust instruction tuning. In The Twelfth International Conference on Learning Representations, 2023.\\n20. Zhe Li, Haiwei Pan, Kejia Zhang, Yuhua Wang, and Fengming Yu. Mambadfuse: A mamba-based dual-phase model for multi-modality image fusion. arXiv preprint arXiv:2404.08406, 2024.\\n21. Wenhao Dong, Haodong Zhu, Shaohui Lin, Xiaoyan Luo, Yunhang Shen, Xuhui Liu, Juan Zhang, Guodong Guo, and Baochang Zhang. Fusion-mamba for cross-modality object detection. arXiv preprint arXiv:2404.09146, 2024.\\n22. Xinyu Xie, Yawen Cui, Chio-In Ieong, Tao Tan, Xiaozhi Zhang, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. arXiv preprint arXiv:2404.09498, 2024.\"}", "{\"title\": \"Q3. Also, the decoded procedure via Mamba is not illustrated clearly.\", \"comment\": \"**Q3. 
Also, the decoded procedure via Mamba is not illustrated clearly. Is SFT or a similar finetuning process still needed for training VLM? Is related VQA data still required? Such technical details are not shown clearly in the manuscript to show the overall model tuning picture. It is better to provide a step-by-step explanation of the decoding procedure and clarification on the fine-tuning process to help address the ambiguity.**\\n\\n**Decoder procedure.** We do not include a separate stage for training the decoder, but rather jointly optimize the decoder, LLM, and projector in a singular finetuning stage. Thus, the overall model tuning picture consists of a single finetuning stage where we only freeze the vision encoder, and jointly tune the MLP projector, LLM, and decoder on all available training data, for 2 epochs. During training, the model concurrently generates a textual response used for the textual autoregressive loss, and a visual response used for the pixel-alignment loss. Both losses are combined and optimized simultaneously. The decoder does not use extra VQA data to be optimized.\"}", "{\"title\": \"Further discussion for Q2.\", \"comment\": \"As a preliminary study for this paper, we have experimented with masked language modeling, where certain text tokens have been masked and predicted by the model. It can be seen as a text-based version of masked reconstruction loss. However, we observe that this approach also reduces performance (as shown in the Table in **Further discussion on Q1**). Thus, we have resorted to alternative design choices when designing EMMA. However, we would assume that using MAE on Mamba-based MLLMs would further reduce performance. 
Since Mamba MLLMs already experience degradation on text-wise masked reconstruction, their image counterparts would likely degrade even more, as Mamba models struggle further with visual latents.\\n\\n**Difference between MAE and pixel-alignment loss in terms of effectiveness.** We argue that our pixel-alignment loss is a more general version of the masked image reconstruction loss. Due to the linear scanning mechanism of Mamba, which processes inputs in sequential order, masking the input at certain tokens is equivalent to using the previous tokens to predict the masked token autoregressively. Thus, the pixel-alignment loss can be seen as a more general form of masked image reconstruction loss in which all pixels have been masked and require prediction. This requires less ground-truth knowledge of pixel values than the MAE loss, which in turn translates to better generalizability and adaptability. On the other hand, by masking out random pixels, MAE may accidentally damage the overall structure of the image when key pixels are lost to random masking. This worsens multimodal alignment, as the overall structure is not preserved for visual supervision in Mamba LLMs.\\n\\nWe will try our best to conduct further experiments regarding this issue; however, due to limited time and computational resources, we may not finish in time. We will continue running these experiments and include them in the appendix of the final version of this paper if accepted.\"}", "{\"title\": \"Q4. Furthermore, when comparing EMMA to other methods, the additional training costs should be listed separately to ensure a fair comparison\", \"comment\": \"**Q4. Furthermore, when comparing EMMA to other methods, the additional training costs should be listed separately to ensure a fair comparison**\\n\\n**Fair comparison of training costs.** We have provided the complete breakdown of training and testing costs in the table for Q1.
The visual encoder has identical train-time and evaluation-time costs because it is frozen and not optimized during both training and testing. All other components use less memory and time in evaluation than in training. The newly introduced structures, the decoder and MFF, are discarded during testing. Note that the training time of the decoder is relatively large compared to other structures despite its small size; this is due to computing the gradients through which the pixel-alignment loss updates the LLM parameters.\"}", "{\"title\": \"Q1. Why is there an imbalance in the quality of visual and textual latents in the Mamba LLM, where coarse visual features are ineffectively aligned with higher-quality textual features? How does this differ from transformer-based models? Do transformer-based models face the same issue?\", \"comment\": \"**Q1. Why is there an imbalance in the quality of visual and textual latents in the Mamba LLM, where coarse visual features are ineffectively aligned with higher-quality textual features? How does this differ from transformer-based models? Do transformer-based models face the same issue?**\", \"a_short_answer_to_your_question_is\": \"transformer models also experience this issue; however, visual Mamba models generally experience significantly worse performance degradation when scaling up, rendering the need for effective visual processing in large-scale Mamba models.\\n\\n**Imbalance in the quality of visual and textual latents in Mamba models.** We argue for the imbalanced quality of visual and textual latents in Mamba models from two perspectives, architectural and empirical.\\n\\n**Architectural imbalance.** The inherent linear scanning mechanism of Mamba poses challenges in effectively extracting visual features on the same level as textual features.
On one hand, transformer-based approaches [1] utilize the self-attention mechanism, which extracts global relationships between different parts of the input sequence. They also utilize a position encoding to learn spatial relationships of visual tokens. On the other hand, CNN-based approaches [2] utilize local receptive fields to connect each neuron to a local region of the input data, allowing interaction of neighboring pixels in all directions. The effectiveness of both methods in processing visual information lies in their ability to account for global, or at least local, clusters of the visual input. However, Mamba utilizes a selective-scan mechanism that only processes sequences in a linear order. While Mamba models achieve excellent performance on textual data, effectively capturing long-term dependencies in multiple language benchmarks [3], Mamba models have difficulty handling image data, where pixel-wise relationships may span multiple rows and columns. Visual Mamba models have been proposed to counteract this issue by combining additional scans from different directions [4] and utilizing a position encoding similar to that of ViTs [5]. However, they nevertheless still utilize the linear selective-scan mechanism. When features possess large embedding dimensions, which is typical in MLLM scenarios, it becomes hard for the model to extract visual cues that are far apart spatially, as the degree of freedom between visual tokens across different rows and columns becomes huge. \\n\\n**Empirical Imbalance.** Our work is inspired by [6], which observes the performance degradation of visual Mamba models when scaled up.
Below is a table taken directly from their paper [7, 8].\\n\\n| Model | VIM-B | VIM-L | VIM-H | \\n|--------------|-------|-------|-------|\\n| Size (M) | 98 | 340 | 755 | \\n| ImageNet Acc.| 81.2 | 81.0 | Collapsed | \\n\\n\\nAs shown, plain Mamba models experience a severe degradation in processing visual features when scaled up. In the 700M parameter-scale, the visual Mamba collapses entirely, rendering the need for effective approaches to alleviate this performance degradation, specifically in Mamba-based models. On the other hand, plain Mamba models do not experience performance degradation in processing textual data, as shown from the experiment results in the original Mamba papers [7, 8]. \\n\\n| Model | Mamba-790M | Mamba-1.4B | Mamba-2.8B | Mamba2-780M | Mamba2-1.3B | Mamba2-2.7B |\\n|--------------|------------|------------|------------|-------------|-------------|-------------|\\n| Size (M) | 790 | 1,400 | 2,800 | 780 | 1,300 | 2,700 |\\n| HellaSwag | 55.1 | 59.1 | 66.1 | 54.9 | 59.9 | 66.6 |\\n| LAMBADA | 62.7 | 64.9 | 69.2 | 61.7 | 65.7 | 69.7 |\\n\\n\\nThus, we observe an imbalance of the processing capabilities of Mamba on visual and textual inputs, when scaled up in billion-parameter models. This phenomenon renders the need for more effective visual processing in large-scaled Mamba models to ensure that visual and textual features are adequately aligned.\"}", "{\"title\": \"Q1. EMMA essentially enhances the visual representation ability of the model. Therefore, the comparison with peers should be conducted, e.g. the contrastive learning in CLIP and SigLIP.\", \"comment\": \"**Q1. EMMA essentially enhances the visual representation ability of the model. Therefore, the comparison with peers should be conducted, e.g. the contrastive learning in CLIP and SigLIP.**\\n\\nWe believe that EMMA and CLIP-related models cannot be adequately compared due to their structural and architectural differences. 
We explain our reasoning in the following paragraphs.\\n\\n**Model architecture for EMMA.** EMMA is an instruction-tuned large multimodal model for general-purpose visual and language understanding. It is architecturally similar to its MLLM transformer counterparts, such as LLaVA [1], which mostly consists of a ViT, MLP projector, and an LLM backbone. Given an image and a corresponding textual prompt, EMMA is trained to generate a corresponding textual response (and an image response). \\n\\n**Difference between CLIP models and MLLM models.** CLIP [2] and SigLIP [3] models, on the other hand, belong to the class of contrastive language-image pretraining models, which consist of an image encoder and a text encoder. Unlike instruction-tuned large multimodal models, which utilize the large language model to process features from both modalities and output textual responses, CLIP and related models encode both modalities separately and only output probability scores that reflect the alignment between the input image and text. Subsequently, CLIP and related models are mostly used in retrieval tasks, while EMMA and LLaVA are used in visual question-answering tasks. Because they possess different underlying structures and produce different forms of output, they cannot be equally compared in this manuscript and can be investigated in future work.\"}", "{\"title\": \"Q6. Provide more detailed analysis of how their method addresses specific challenges in multimodal learning that previous methods have struggled with.\", \"comment\": \"**Q6. EMMA\\u2019s approach is somewhat simplistic, as it merely enhances the consistency between deep and shallow features and the original image at the output stage. This method requires extra training, substantial computational resources, and introduces additional structures during deployment. As a result, the technical contribution appears somewhat trivial. 
It would be better if this paper could provide more detailed analysis of how their method addresses specific challenges in multimodal learning that previous methods have struggled with.**\\n\\n**Technical Contributions.** We believe our main contribution of this paper lies in achieving better alignment of visual and textual features, specifically in large-scaled multi-modal Mamba models. In the paper, we first observe the inconsistent performance of Mamba models with respect to visual and textual inputs when scaled-up in parameters. In specific, Mamba models tend to experience a significant performance decline in processing visual data when scaled-up, despite them being able to seamlessly handle textual data in large-scaled settings. We analyze this issue in two perspectives, both architecturally and empirically. \\n\\n**Architectural imbalance.** The inherent linear scanning mechanism of Mamba poses challenges in effectively extracting visual features on the same level as textual features. On one hand, transformer-based [17] approaches utilize the self-attention mechanism that extracts global relationships between different parts of the input sequence. They also utilize a position encoding for learning spatial relationships of visual tokens. On the other hand, CNN-based approaches [18] utilize local receptive fields to connect each neuron to a local region of the input data, allowing interaction of neighboring pixels in all directions. The effectiveness of both methods in processing visual information lies in their ability to account for global, or to the least, local clusters of the visual input. However, Mamba utilizes a selective-scan mechanism that only processes sequences in a linear order. 
While Mamba models achieve excellent performance on textual data, effectively capturing long-term dependencies in multiple language benchmarks [19], Mamba models have difficulty handling image data, where pixel-wise relationships may span multiple rows and columns. Visual Mamba models have been proposed to counteract this issue by combining additional scans from different directions [20] and utilizing a position encoding similar to that of ViTs [21]. However, they nevertheless still utilize the linear selective-scan mechanism. When features possess large embedding dimensions, which is typical in MLLM scenarios, it becomes hard for the model to extract visual cues that are far apart spatially, as the degree of freedom between visual tokens across different rows and columns becomes huge. \\n\\n\\n\\n**Empirical imbalance.** Mamba models experience an imbalance in the empirical evaluation of vision tasks and textual tasks when scaled up in parameters. [22] observes the performance degradation of visual Mamba models when scaled up. Below is a table taken directly from their paper.\\n\\n\\n\\n| Model | VIM-B | VIM-L | VIM-H |\\n|--------------|-------|-------|-------|\\n| Size (M) | 98 | 340 | 755 |\\n| ImageNet Acc.| 81.2 | 81.0 | Collapsed |\\n\\nAs shown, plain Mamba models experience a severe degradation in processing visual features when scaled up. At the 700M parameter scale, the visual Mamba collapses entirely, rendering the need for effective approaches to alleviate this performance degradation, specifically in Mamba-based models. On the other hand, plain Mamba models do not experience performance degradation in processing textual data, as shown by the experimental results in the original Mamba papers [5, 6].
\\n\\n| Model | Mamba-790M | Mamba-1.4B | Mamba-2.8B | Mamba2-780M | Mamba2-1.3B | Mamba2-2.7B |\\n|--------------|------------|------------|------------|-------------|-------------|-------------|\\n| Size (M) | 790 | 1,400 | 2,800 | 780 | 1,300 | 2,700 |\\n| HellaSwag | 55.1 | 59.1 | 66.1 | 54.9 | 59.9 | 66.6 |\\n| LAMBADA | 62.7 | 64.9 | 69.2 | 61.7 | 65.7 | 69.7 |\\n\\n\\nThus, we observe an imbalance of the processing capabilities of Mamba on visual and textual inputs, when scaled up in billion-parameter models. This phenomenon renders the need for more effective visual processing in large-scaled Mamba models to ensure that visual and textual features are adequately aligned.\"}", "{\"title\": \"Q6. Continued\", \"comment\": \"**Application of Mamba in MLLMs.** On the other hand, to deploy Mamba models in MLLM settings, where large-scaled Mamba models are necessary in processing both textual and visual information, it is crucial to preserve a balance in the quality of visual and textual features to achieve multi-modal alignment. Previous Mamba MLLM works such as VL Mamba and Cobra directly replaces the transformer LLM with the Mamba LLM model without addressing this imbalance of feature quality. Still, the performance of these models show the potential of extending mambas to the realm of MLLMs, especially with their fast inference speeds.\\n\\n\\n\\n**Ineffectiveness of transformer-based techniques.** Furthermore, as a preliminary study for this work, we have also tried transformer-based multimodal alignment approaches that enhance visual features or encourage multimodal alignment, such as visual-feature compression [23], masked-alignment [24, 25], and contrastive losses [26, 27], all of which fail in the Mamba LLM setting. Here, we show the respective model performances on the TextVQA dataset, which we find is a good indicator for overall model performance. 
\\n\\n| Model | EMMA-V1 | EMMA-V2 | Cobra | Compression | Masked | Contrastive |\\n|--------------|---------|---------|-------|-------------|--------|-------------|\\n| TextVQA | 57.2 | 56.2 | 52.4 | 45.8 | 46.1 | 50.2 |\\n\\nAs shown, transformer-based approaches tailored for multimodal alignment in MLLM settings may in fact negatively impact model performance, which renders the necessity to devise Mamba-specific multi-modal alignment approaches. \\n\\n\\n**Novelty of our approach.** On the other hand, autoregressive pretraining on the visual features are shown to improve visual Mamba models [22], and for the first time allows visual Mamba models to scale up to large-sized models (600M). Given the effectiveness of autoregressive pretraining in promoting the single-modal visual Mamba model, we extend this approach to the multimodal LLM setting. Subsequently, we propose the pixel-wise alignment loss to encourage the generated autoregressive features to structurally align with the input image. Furthermore, when visualizing the LLM features, we notice a gradual decline of the intermediate visual features, causing the final generated visual feature to degrade. In response, we propose the multi-scale feature fusion module to tackle this issue, leading to better preservation of intermediate visual features, inducing better multimodal alignment of visual and textual features. In essence, our method specifically tackles the issue of multi-modal alignment in Mamba models, while the direct application of transformer-based multimodal methods have failed. As a result, EMMA demonstrates improved performances on a variety of metrics, and most importantly, significantly reduces the degree of visual hallucinations in the HallusionBench benchmark, surpassing transformer models with twice the size. 
We believe this study provides groundwork for further research of Mamba models in the MLLM framework as a potential replacement for transformer-based approaches, with their comparable performance and superior inference latency.\\n\\n\\n**Potential impact and application of our observations and techniques.** On one hand, we believe the insights introduced in this paper will help inspire future research of mamba models in the realm of MLLM and other downstream tasks. Specifically, our approach to strengthen the quality of visual features in large-scaled mamba models may provide future researchers with a framework for multimodal alignment in Mamba models. Our light-weight Mamba-based decoder also serves as an efficient means to conduct feature reconstruction in the visual modality. Furthermore, our multi-scale feature fusion module can be an inspiration for combining same-modality features in future Mamba-based frameworks. On the other hand, we aim for our research to pitch Mamba as a potential contender among transformer-based architectures. Given Mamba's swift inference capabilities, we show in this work that Mamba-based MLLMs can achieve competitive performance with transformer models, and underscores its potential for widespread adoption and integration into diverse AI applications.\"}", "{\"comment\": \"Dear Reviewer yCEA,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Q4. High-resolution image is an important direction for MLLM. Structural constraints on the high-resolution visual features are an interesting point to discuss. Authors are suggested to conduct the experiment under a high-resolution setting.\", \"comment\": \"**Q4. High-resolution image is an important direction for MLLM. Structural constraints on the high-resolution visual features are an interesting point to discuss. 
Authors are suggested to conduct the experiment under a high-resolution setting.**\\n\\n\\n**Current high-resolution benchmarks.** Thank you for bringing this issue to our attention. For our benchmark performance report, we only conducted experiments on commonly used VQA benchmarks from previous MLLM works. Subsequently, we have searched the current literature for high-resolution benchmarks. [5] proposes a high-resolution benchmark for multimodal models named MagnifierBench, which tests the ability of the model to discern the details of small objects in high-resolution input images. However, this dataset remains unavailable to the public: its Hugging Face repo link returns a 404 error (please see https://github.com/Luodian/Otter/issues/343). HRVQA [6] is another benchmark proposed to evaluate the understanding capability of VQA models for aerial images at 1024 x 1024 resolution. However, their benchmark requires an evaluation server to evaluate model predictions, which remains unavailable on their official project page (https://hrvqa.nl/). Consequently, we are unable to find any high-resolution VQA benchmarks that suit the evaluation of MLLMs. If you have any high-resolution VQA benchmarks for multimodal LLMs in mind, please let us know and we will test our model at once.\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer T8Kc,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback.
We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"A grateful thanks to all reviewers.\", \"comment\": \"We express our gratitude to all the reviewers for their meticulous reviews and valuable feedback on our paper. The effort put forth in posing insightful questions will greatly aid us in refining and clarifying our work. We are pleased to learn that the motivation has been effectively clarified (as noted by Reviewer T8Kc and yy1F), the paper is well-written (highlighted by Reviewer T8Kc and yy1F), and the experiments substantiate the efficacy of our approach (acknowledged by all reviewers). Over the next few days, we will diligently work towards addressing all raised concerns in order to enhance the quality and comprehensiveness of this work. Thanks again for reviewing our work!\"}", "{\"title\": \"Q2. The inputs require both images and captions, how the captions are acquired are not illustrated well. A more detailed description of the data preparation process is desired here.\", \"comment\": \"**Q2. The inputs require both images and captions, how the captions are acquired are not illustrated well. A more detailed description of the data preparation process is desired here.**\\n\\n**Training Data and Preparation.** We train our model using the combination of LLaVA-v1.5-mixed-665k, LVIS-Instruct-4V, and LRV-Instruct datasets. Here we will provide a more detailed description of the composition of each dataset, as well as the overall data preparation process. \\n\\n**LLaVA-v1.5-mixed-665k.** This is the same dataset used for finetuning the LLaVA-v1.5 model [16]. We provide the breakdown for this dataset as a comma-separated list with the origin and corresponding number of samples: LLaVA 158K, ShareGPT 40K, VQAv2 83K, GQA 72K, OKVQA 9K, OCRVQA 80K, A-OKVQA 66K, TextCaps 22K, RefCOCO 48K, and VG 86K, for a total of 665K data samples. According to the LLaVA-v1.5 paper, these data are preprocessed as follows: 1. 
For all VQA datasets, QA pairs from the same training image are merged into a single conversation. 2. For ShareGPT, they filter out invalid conversations. They also truncate long conversations that surpass 2048 tokens. 3. Each QA pair in A-OKVQA is augmented k times, where k is the number of choices per question, to counterbalance the lack of multiple-choice data. 4. They sample 80K conversations from OCRVQA. 5. For Visual Genome, they sample 10 annotations for images with additional annotations. 6. For RefCOCO, conversations are dissected into segments, each containing fewer than 10 conversations. 7. All data splits are concatenated together to form the final 665K dataset.\\n\\n\\n**LVIS-Instruct-4V.** This is the dataset presented by [17], in an attempt to curate a fine-grained instruction-following dataset by explicitly conditioning the generation of instruction data on both language and visual inputs. To generate this dataset, they leverage image data from LVIS [18], as well as their fine-grained annotations to prompt the GPT-4V model to undertake two key tasks: (1) generate conversational question-answer lists through self-reasoning and (2) produce high-quality image descriptions guided by precise bounding box information. In the end, they use 110K images from LVIS and generate 220K high-quality visual instructions, which consist of 110K conversational data and 110K descriptional data. These images along with GPT-4V-generated captions form this dataset. \\n\\n**LRV-Instruct.** This dataset [19] leverages GPT4 to cover open-ended positive and negative instructions in different linguistic styles for images in the Visual Genome dataset. They use GPT4 to create instruction-following data with the image size, bounding boxes, and dense captions as input, and generate caption instances in both declarative and interrogative formats. Afterwards, they remove instances with answers longer than 30 words and those with unneeded content. 
This results in a total of over 400k image-visual instruction pairs after filtering.\\n\\n\\n**Model Preprocessing.** After combining these three datasets, we finetune our entire model on the combined set. Because we utilize pre-trained vision encoders and a Mamba LLM, for data pre-processing we also adopt the tokenizer and image transforms of the respective components. For the Mamba LLM, we utilize the GPTNeoXTokenizerFast tokenizer to convert input text into textual tokens. For both the SigLIP and DinoV2 ViTs, we use their respective image transforms, as listed below: 1. SigLIP: resize to (384, 384), center crop, normalize by mean 0.5 and std 0.5. 2. DinoV2: resize to (384, 384), center crop, normalize by mean 0.485 and std 0.225. We will add these details to our supplementary material.\"}", "{\"summary\": \"This paper proposes a Mamba-based integration for vision-language models, similar to the typical VLM framework with one vision encoder, one projector, and one LLM. The proposed method takes one image and its corresponding caption as input, followed by multi-scale feature extraction and fusion. A pixel-wise alignment loss is applied to the visual features to preserve structural visual cues, and an autoregressive NLP loss is applied to the textual features. Experiments are conducted on several benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Pixel-wise alignment loss is interesting. The LLM outputs are text tokens and are further decoded into the image domain for measuring pixel-level similarity with the input image.\\n2. The hierarchical alignment via multi-scale fusion is developed to retain fine-grained visual features.\\n3. The experimental results seem promising on the benchmark datasets.\", \"weaknesses\": \"1. While the main contribution is set for the loss design of model training, the proposed loss function seems rather general and not specific to the Mamba structure.
The general pixel-wise alignment loss seems functional for all VLMs, rather than only the Mamba structure. Can you clarify how the proposed loss function leverages or is tailored to the unique properties of Mamba architectures compared to transformer-based VLMs?\\n2. The inputs require both images and captions; how the captions are acquired is not illustrated well. A more detailed description of the data preparation process is desired here. Also, the decoding procedure via Mamba is not illustrated clearly. Is SFT or a similar finetuning process still needed for training the VLM? Is related VQA data still required? Such technical details are not shown clearly in the manuscript to convey the overall model tuning picture. It is better to provide a step-by-step explanation of the decoding procedure and clarification on the fine-tuning process to help address the ambiguity.\\n3. The pixel-alignment loss is set for structure preservation, but the role of the LLM here is not clear, as the caption has been sent to it. So the LLM seems to function as a single mapping to fulfill this objective loss. Can you elaborate on the specific role of the LLM in the context of the pixel-alignment loss? \\n4. Feature fusion is not motivated well and seems a general operation for representation enhancement. It would be better to provide a more detailed justification for the choice of feature fusion technique. Specifically, an explanation is expected of how your approach differs from or improves upon standard feature fusion methods, particularly in the context of Mamba-based architectures.\", \"questions\": \"Overall, the pixel-alignment loss design is interesting, but there are many ambiguous technical details to clarify before the contribution is clear.
Further clarification is required to better position the proposed method within the scope of the visual instruction tuning design of VLMs\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q4. Can you elaborate on the specific role of the LLM in the context of the pixel-alignment loss?\", \"comment\": \"**Q4. The pixel-alignment loss is set for structure preservation, the role of LLM here is not clear as the caption has been sent to it. So LLM seems to function as a single mapping to fulfill this objective loss. Can you elaborate on the specific role of the LLM in the context of the pixel-alignment loss?**\\n\\nIn short, we want our pixel-wise alignment loss to directly enforce the LLM to focus not only on the generation of the textual response but also on learning better quality visual representations and preserving relevant visual information for multimodal alignment. As mentioned in the response to Q1, Mamba models are inherently weaker in processing visual features, especially in larger-scale models. In the context of LLMs, where parameters are in the scope of billions, we hypothesize that this ineffectiveness in processing visual information may severely impact the ability for Mamba models to align visual and textual features, necessitating additional supervision of visual information in Mamba MLLMs. \\n\\n\\n**The role of LLMs in conventional MLLMs.** Conventional MLLMs usually consists of a ViT, an MLP projector, and an LLM. The entire training process of MLLMs endows the LLM to accept not only textual information, but also visual information through the vision encoder, to produce textual responses based on multimodal input. Thus, the LLM here functions as a mapping from image space and text space to text space, optimized solely through a loss function that constrains the text space response. 
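To make this mapping concrete, the contrast between the conventional text-only objective and the extended objective with the pixel-alignment term can be sketched as follows. This is a minimal numpy sketch: the mean-squared form of the pixel term and the weighting `lam` are illustrative assumptions on our part, not the exact formulation used in the paper.

```python
import numpy as np

def text_autoregressive_loss(logits, targets):
    """Token-level cross-entropy over the generated textual response."""
    # logits: (T, V) next-token scores; targets: (T,) gold token ids
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def pixel_alignment_loss(reconstruction, image):
    """Pixel-wise discrepancy between the decoder's visual response and the
    input image (assumed mean-squared here for illustration)."""
    return ((reconstruction - image) ** 2).mean()

def joint_finetuning_loss(logits, targets, reconstruction, image, lam=1.0):
    """Single-stage objective: both losses combined and optimized together;
    `lam` is a hypothetical weighting, not a value stated in the paper."""
    return (text_autoregressive_loss(logits, targets)
            + lam * pixel_alignment_loss(reconstruction, image))
```

With `lam = 0` this reduces to the conventional text-only mapping; a positive `lam` additionally constrains the decoder's visual response to match the structure of the input image.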
\\n\\n**Imbalance between visual and textual features.** We observe an inherent flaw in the optimization of current MLLMs. Despite the network handling both visual and textual data, its visual features do not seem to be conditioned in any form. Combined with the observed performance degradation of large-scale visual Mamba models, this motivates us to extend the autoregressive loss from the text space to the image space as well, retaining the linguistic capabilities of the LLM while preserving structural visual cues of the image, ultimately enabling better multi-modal alignment between the image and text modalities. During training, the LLM here functions as a mapping from image space and text space to image space and text space, optimized through a loss function that constrains both the textual responses and the structure of visual information. Thus, the LLM is trained to simultaneously generate a logical response to the input image and text data while also preserving structural visual information, reducing visual hallucinations. During testing, the visual constraints are unnecessary and discarded entirely.\"}", "{\"title\": \"Q2. Is cross-attention necessary for fusing intermediate features? Could simpler alternatives be explored to achieve the same goal?\", \"comment\": \"**Q2. Is cross-attention necessary for fusing intermediate features? Could simpler alternatives be explored to achieve the same goal?**\\n\\nTo summarize, we believe that cross-attention specifically is not necessary for fusing intermediate features, and simpler alternatives can be explored to achieve the same goal. However, given the small computational complexity compared to the overall model and the effectiveness of cross-attention in previous feature fusion works, we resort to the use of cross-attention to fuse intermediate features in our work.
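For concreteness, below is a minimal single-head numpy sketch of this kind of fusion: final-layer features act as queries while an intermediate layer supplies keys and values, shown both as standard softmax cross-attention and as a kernelized linear-attention variant in the spirit of the linear attention cited in this reply. The single-head setting, the projection shapes, and the elu(x)+1 feature map are illustrative assumptions, not the exact MFF configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(final_feats, inter_feats, Wq, Wk, Wv):
    """Queries from the final layer; keys/values from an intermediate
    layer, so that layer's visual cues are folded back in."""
    Q, K, V = final_feats @ Wq, inter_feats @ Wk, inter_feats @ Wv
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (N, M) attention map
    return scores @ V

def linear_attention(final_feats, inter_feats, Wq, Wk, Wv, eps=1e-6):
    """Same fusion with a kernel feature map phi(x) = elu(x) + 1,
    avoiding the explicit N x M attention matrix."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Q, K, V = phi(final_feats @ Wq), phi(inter_feats @ Wk), inter_feats @ Wv
    kv = K.T @ V                    # (d, d) key/value summary
    z = Q @ K.sum(axis=0) + eps     # per-query normalizer
    return (Q @ kv) / z[:, None]
```

The linear variant reduces the fusion cost from quadratic to linear in sequence length, which is one way a "simpler alternative" can achieve the same goal.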
We do show that replacing the cross-attention with linear attention achieves similar performance gains.\\n\\n**Cross-attention background and overview.** The cross-attention module [1] replaces key and value vectors in self-attention with inputs from different sequences, enabling the calculation of attention scores based on alternative information. Currently, cross-attention is commonly used in multimodal models for representation fusion, as evidenced by various methods from basic MLLM approaches like Flamingo [2] and EVLM [3] to advanced representation fusion MLLM methods such as LLAVA-UHD [4]. Cross-attention has demonstrated its capability to fuse both same-modal and multi-modal features in these works, which is why we have also opted for it. \\n\\n**The use of attention mechanisms in Mamba.** The use of the attention mechanism in Mamba-based architectures is not uncommon. The authors of the original Mamba [5] paper provide an attention-augmented model variant in their follow-up work Mamba2 [6], which shows improved performance without introducing significant computational overhead. Furthermore, hybrid Mamba-transformer architectures such as Jamba [7,8], Zamba [9], and Griffin [10] have been proposed to combine the efficiency of Mamba and state-space models with the benefits of transformer-based attention. \\n\\n**Cross-attention in EMMA.** We observe that current Mamba-based multimodal models exhibit a gradual loss of fine-grained visual cues in deeper layers of the MLLM. Thus, we propose a hierarchical alignment module to counteract this information loss. The essence of using cross-attention in our work lies in merging information from multiple layers of representation, ensuring that information from intermediate layers is also involved in visual representation alignment to prevent loss of image information at intermediate layers. 
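As a rough sketch of this kind of layer-wise merging (single-head, unbatched cross-attention with final-layer features as queries and intermediate-layer features as keys/values; the residual merge and all shapes are our own simplifications, not the exact EMMA module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head cross-attention: attention scores are computed between
    two different feature sequences rather than within one sequence."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

def fuse_intermediate_layers(final_feats, intermediate_feats):
    """Merge intermediate-layer visual features into the final-layer ones,
    so fine-grained cues from earlier layers survive to the output."""
    fused = final_feats.copy()
    for inter in intermediate_feats:
        fused = fused + cross_attention(final_feats, inter)  # residual merge
    return fused

rng = np.random.default_rng(0)
final = rng.normal(size=(4, 8))                  # 4 visual tokens, dim 8
inters = [rng.normal(size=(4, 8)) for _ in range(2)]
fused = fuse_intermediate_layers(final, inters)
```

Because the intermediate features re-enter through attention, information from earlier layers can still reach the fused output even if the backbone has attenuated it.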
Therefore, cross-attention is not mandatory, but only serves as a means to merge layer-wise information. Thus, replacing it with simpler and more efficient attention mechanisms, or using different representation fusion modules, can be explored. Nevertheless, due to the minimal number of parameters involved, there may not be a significant difference in computational efficiency, and hence this is not explored in this work. \\n\\n**Alternative Attention Mechanisms in EMMA.** As an example, we replace the cross-attention modules with the lucidrains implementation [11] of linear attention [12,13] in the EMMA-V1 model. This enables linear-time computation of attention, and achieves the following performances:\\n\\n| Model | GQA | VizWiz | VQA$^{T}$ | POPE | MME | MMB | HBench |\\n|------------|------|--------|-----------|------|-------|------|--------|\\n| EMMA-V1-CA | 60.5 | 52.1 | 57.2 | 88.0 | 1572.8| 53.2 | 51.0 |\\n| EMMA-V1-LA | 60.2 | 52.1 | 57.7 | 88.1 | 1560.0| 52.5 | 46.9 |\\n\\nEMMA-V1-CA denotes the original proposed model with cross-attention, while EMMA-V1-LA denotes EMMA with linear attention instead. EMMA-V1-LA performs similarly to EMMA-V1-CA on the majority of benchmarks, except on HallusionBench, where it experiences a 4\\\\% drop. This could be attributed to the trade-off between the sub-quadratic complexity and the overall performance of linear attention. Nevertheless, EMMA-V1-LA still outperforms the baseline on HallusionBench by 5\\\\%.\"}
The authors argue that Mamba\\u2019s difficulty with visual data stems from the lack of positional encoding, which causes the gradual loss of fine-grained spatial information during processing. To mitigate this, the paper introduces the EMMA method, which enhances the structural consistency between Mamba\\u2019s output visual features and the original image by reconstructing the image through a decoder. Additionally, EMMA incorporates cross-attention within Mamba, combining multi-level intermediate features via cross-attention to form the final output, thereby preventing the loss of fine-grained details in deeper layers. The results show that EMMA leads to improved performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is simple yet effective.\"], \"weaknesses\": \"1. **Increased Complexity Due to Cross-Attention**:\\n EMMA introduces cross-attention operations into the Mamba network, reverting the model\\u2019s original sub-quadratic complexity back to quadratic complexity, which undermines the original intent of using Mamba. Could the authors provide the computational complexity of the cross-attention operations compared to the overall method? Is cross-attention necessary for fusing intermediate features? Could simpler alternatives be explored to achieve the same goal?\\n\\n2. **Training Costs and Fair Comparisons**: \\n EMMA involves additional training steps, which should be explicitly detailed in the paper. Furthermore, when comparing EMMA to other methods, the additional training costs should be listed separately to ensure a fair comparison (training time, GPU hours and memory requirement). Since EMMA alters the original MLLM architecture, its inference complexity will also increase. Table 4 should include a breakdown of the increased complexity due to hierarchical alignment. For fairness, Table 4 should also provide EMMA\\u2019s inference speed when using Mamba LLM-2.8B.\\n\\n3. 
**Limited Technical Contribution**: \\n EMMA\\u2019s approach is somewhat simplistic, as it merely enhances the consistency between deep and shallow features and the original image at the output stage. This method requires extra training, substantial computational resources, and introduces additional structures during deployment. As a result, the technical contribution appears somewhat trivial. It would be better if this paper could provide more detailed analysis of how their method addresses specific challenges in multimodal learning that previous methods have struggled with.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"References, continued.\", \"comment\": \"24. David Mizrahi, Roman Bachmann, Oguzhan Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, and Amir Zamir. 4m: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36, 2024.\\n25. Yonatan Bitton, Gabriel Stanovsky, Michael Elhadad, and Roy Schwartz. Data efficient masked language modeling for vision and language. arXiv preprint arXiv:2109.02040, 2021.\\n26. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383, 2021.\\n27. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp.8748\\u20138763. PMLR, 2021\"}", "{\"title\": \"Q3. Why do the current results show that the Mamba LLM still lags behind the transformer architecture? 
In this case, does the speed advantage gained by using the Mamba architecture sufficiently compensate for the performance loss?\", \"comment\": \"**Q3. Why do the current results show that the Mamba LLM still lags behind the transformer architecture? In this case, does the speed advantage gained by using the Mamba architecture sufficiently compensate for the performance loss?**\\n\\nIn short, we believe that Mamba-based LLMs do not lag behind the transformer architecture but rather show competitive performance with it, depending on the training procedure. We believe that the enhanced performance of some transformer-based models ultimately lies in the use of more and better-quality data. Furthermore, all Mamba-based models are at a disadvantage (in terms of training data) when compared with transformer-based models, and a completely fair comparison between them is unavailable at the current stage. I will divide my answer to this question into three separate segments: 1. Analysis of current quantitative results, 2. Analysis of transformer-based methods, and 3. Analysis of Mamba-based methods. \\n\\n\\n**Quantitative analysis of current results.** We believe that Mamba-based models offer performance competitive with the current suite of small-scale transformer models. For instance, in Table 1 of the original paper, both versions of our model surpass the performance of LLaVA-Phi and MobileVLM on all metrics, except the MMB metric for EMMA-V2. Our performance is a bit weaker than MobileVLM v2, but we believe that this performance difference should be expected due to the use of different training datasets. We also surpass TinyLLaVA on all but the VQA-v2 and MMB metrics. \\n\\n**Analysis of transformer-based methods.** 1. General facts about transformer-based LLMs. All transformer methods follow a two-stage training paradigm consisting of a pretraining phase and a finetuning phase. 
All backbone LLMs were trained on 1.3T (MobileLLaMA) and 1.4T (Phi-2-2.7B) text tokens prior to the MLLM pretraining and finetuning stages. 2. MobileVLM2. In this section, we focus on addressing the performance differences relative to MobileVLM v2, as opposed to the original MobileVLM. Architecturally, the improvement of MobileVLM to MobileVLM V2 is on the lightweight downsample projector, which replaces point-wise and depth-wise convolutions with average pooling and addition, mainly for performance gains. We suggest that the performance gap between these two models most likely lies in the amount of data used to pre-train and fine-tune both models. For MobileVLM, they utilize the same training corpora as LLaVA v1.5, which consists of 1. the 558K subset of the LAION-CC-SBU dataset for pretraining and 2. the 665K data mixture for finetuning, which includes samples from COCO, GQA, OCR-VQA, TextVQA, and Visual Genome. The total is around 1.2M samples for the entire training procedure. For MobileVLM V2, they utilize a mixture of 3.6M samples. This includes 1. 1.2M ShareGPT4V-PT data for pretraining, 2. a mixture of 2.4M samples for finetuning, which includes samples from Visual Dialogue, TextVQA, VSR, VIGC, IConQA, SQA, COCO, SBU, and ShareGPT4V. As shown, MobileVLM V2 utilizes three times the amount of data, which may explain its significant performance gains over the original MobileVLM paper. \\n\\n\\n**Analysis of Mamba-based methods.** 1. General facts about Mamba-based LLMs. Both Cobra and EMMA utilize a single finetuning phase, as the inclusion of a pretraining phase does not significantly affect model performance. To conduct a fair comparison with our baseline, we also utilize the same dataset as Cobra to conduct our experiments. Furthermore, all Mamba-based backbones, unlike their transformer counterparts, are trained on the SlimPj [19] (MambaV1) and Pile [20] dataset (MambaV2), with only 627B and 300B tokens respectively. 
Thus, we believe it is impossible to make a completely fair comparison between transformer and Mamba-based models unless the underlying LLM as well as the entire process of multi-modal adaptation of these models utilize the same exact set of data. However, due to limited computational power, we do not have the capability of training a Mamba LLM from scratch, and thus a completely fair comparison between these methods remains unavailable. Nevertheless, Mamba-based models are able to achieve performance comparable to that of transformer-based models, despite having overall less training data and fewer training stages.\"}", "{\"title\": \"Discussion on Q3\", \"comment\": \"Thank you for your clarification.\"}", "{\"title\": \"Q3. Continued.\", \"comment\": \"**Speed vs. Performance trade-offs.** In the current case where Mamba models experience worse performance compared to some transformer models, we believe the choice of either model depends on the deployment case. For instance, given the 3-times faster inference speed of Mamba models, they are certainly better suited for deployment in scenarios that require rapid responses, such as real-time translation, fraud detection and edge-device maneuvering. However, for platforms that are less time-demanding, transformer models may still be the solution given their reliability and performance gains. Nevertheless, our research aims to more effectively adapt Mamba-based models for application within the domain of MLLMs, thereby providing impetus for future investigations aimed at fully harnessing the latent capabilities inherent in the Mamba architecture.\"}", "{\"title\": \"A huge thanks to all reviewers, we look forward to hearing from you.\", \"comment\": \"We thank the reviewers again for their valuable feedback. 
We are grateful that most reviewers are quite affirmative of our overall contributions, including the novelty and motivation (\\\"Pixel-wise alignment loss is interesting\\\" by 5nFQ, \\\"Motivation Well Clarified\\u201d by T8Kc, \\\"The imbalance ... does make sense to investigate ...\\\" by yy1F), the extensiveness of empirical evaluation (\\\"The paper presents comprehensive experiments\\\" by T8Kc, \\\"The experiments are sufficient\\\" by yy1F), the effectiveness of the proposed method (\\\"simple yet effective\\\" by yCEA, \\\"experimental results seem promising\\\" by 5nFQ, \\\"robustness of the proposed approach\\\" by T8Kc, \\\"effectiveness of the proposed method\\\" by yy1F), and clarity and soundness of the paper (\\\"Well Written\\\" by T8Kc, \\\"well-written and easy to follow\\\" by yy1F).\", \"we_summarize_the_main_contributions_of_our_paper\": \"1. We observe a clear difference in the performance of mamba-based models on vision vs. language tasks, especially in large-scale settings. This motivates the design of mamba-specific architectures to bring out the full potential of mamba models in MLLM tasks.\\n2. We are the first to extend beyond the vanilla MLLM framework for mamba models and propose mamba-specific modules to address the imbalance between visual and textual latents, enabling better multi-modal fusion in large-scale mamba-based models. \\n3. Due to hierarchical and structural alignment, EMMA is able to achieve better performance than current Mamba-based MLLMs and competitive performance with transformer-based approaches of similar scales. Furthermore, our model demonstrates superior capabilities in reducing hallucinations, where it outperforms all current models <4B on POPE and all current models <10B on HallusionBench in the OpenCompass VLM leaderboard. In terms of inference speed, our model is 3 times faster than current similar-sized transformer models. 
This presents Mamba as a promising contender in the realm of efficient general AI models that require fast response speeds.\\n\\nWe have also made a number of comments to address all reviewers' suggestions and concerns. A short summary of the\", \"replies_are_made_as\": \"1. Computation analysis: We provide a comparison of each module in terms of computational cost during training and inference. Our newly proposed structural and hierarchical alignment occurs only during training and is discarded during the inference stage.\\n2. Training stage clarification: Our method only includes a supervised fine-tuning stage that tunes all LLM, MLP projector, and decoder parameters in an end-to-end fashion. A summary of the preprocessing of training data is also included in the revised paper.\\n3. Motivation clarification: We analyze mamba models both architecturally and empirically to demonstrate the imbalance in the quality of visual and textual latents. This phenomenon underscores the need for more effective visual processing in large-scale Mamba models to ensure that visual and textual features are adequately aligned.\\n4. Additional visualizations: We include additional visualizations to further demonstrate the effectiveness of our method.\\n\\nSince the rebuttal deadline is approaching, please let us know if our replies have addressed your concerns. We would be more than delighted to have further discussions and improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer yCEA,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. 
We greatly appreciate your continued engagement.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer 5nFQ,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"Q2. Besides, does the masked image construction loss achieve a function similar to EMMA?\", \"comment\": \"**Q2. Besides, does the masked image construction loss achieve a function similar to EMMA?**\\n\\n**Difference between pixel-wise alignment loss and masked image construction loss.** We believe that the masked image reconstruction loss, which is presented in masked autoencoders [4], does not achieve similar results to EMMA. We analyze this in two aspects: form-wise, and training-wise. 
\\n\\nForm-wise, the masked reconstruction loss $\\mathcal{L}_{mae}$ takes the form \\n\\n$$\\mathcal{L}_{mae}(Image\\_{masked}, \\mathrm{MAE}(Image\\_{masked}))$$\\n\\nwhereas our pixel-alignment loss $\\mathcal{L}_{pix}$ takes the form \\n\\n$$\\mathcal{L}_{pix}(Image\\_{full}, \\mathrm{EMMA}(Text\\_{full}, Image\\_{full}))$$\\n\\n$\\mathrm{MAE}$ consists of an MAE encoder and an MAE decoder and can be expanded as \\n\\n$$\\mathrm{MAE}(Image\\_{masked}) = \\mathrm{MAE}_{decoder} (\\mathrm{MAE}_{encoder}(Image\\_{masked}))$$\\n\\nOn the other hand, $\\mathrm{EMMA}$ consists of a ViT vision encoder, an MLP projector, a Mamba LLM backbone, and a Mamba-based decoder, and can be expanded as \\n\\n$$\\mathrm{EMMA}(Text\\_{full}, Image\\_{full}) = \\mathrm{EMMA}_{decoder} (\\mathrm{LLM}_{Mamba}(Text\\_{full}, \\mathrm{MLP}_{Proj}(\\mathrm{ViT}(Image\\_{full}))))$$\", \"we_make_two_observations\": \"Firstly, the inputs differ, where the pixel-alignment loss aligns full images with reconstructed full images, while the MAE loss aligns masked images with reconstructed full images. Secondly, the model architectures differ, where the pixel-alignment loss operates on an MLLM structure that also incorporates textual information for the reconstruction of images, while the MAE loss operates on a simple visual encoder-decoder structure.\\n\\nTraining-wise, with the masked reconstruction loss, a model is first pre-trained on masked-out images and then finetuned on specific downstream datasets (with unmasked full image patches) for final evaluation. This introduces additional training steps and requires more training data. On the other hand, EMMA completely skips the pretraining phase and directly adds the pixel-wise alignment objective during the finetuning process, requiring no additional training stages or data for the joint optimization of the pixel-wise alignment loss and the text autoregressive loss.\"}", "{\"comment\": \"Thank you for your responses. 
I have no further concerns and will maintain the initial score (6: borderline accept) for your paper.\"}", "{\"title\": \"Q5. Authors are suggested to display more qualitative results.\", \"comment\": \"**Q5. Authors are suggested to display more qualitative results.**\\n\\nWe have included 3 more qualitative results showcasing the visualization of intermediate features and model responses, for a total of 5 visualizations in the appendix of the paper (we have resubmitted the new revised paper to openreview). These visualizations demonstrate that EMMA is able to retain focus on important visual details even in later intermediate layers, resulting in better sensitivity to fine-grained visual details and less visual hallucination.\"}", "{\"title\": \"Q1. Continued.\", \"comment\": \"**Visual and textual latents in transformer-based models.** We argue that the issue of significant performance degradation on visual tasks when scaled up is unique to Mamba-based models, which requires more immediate remedies compared to transformer-based approaches to enable the full potential of the adaptation of Mamba models to MLLMs. Transformer models do not experience this scaling issue in vision or language tasks. However, given that typical MLLM approaches do not constrain visual inputs in any form, an autoregressive loss on visual tokens still demonstrates performance improvements in the EMU series and other works [9, 10, 11, 12]. Nevertheless, these approaches utilize complicated decoder structures that require an additional training stage and extra data to effectively achieve visual self-supervision of LLMs. 
On the other hand, we show that a simple Mamba-based decoder is effective in achieving visual supervision, enabling our model to grasp more intricate visual details, which results in fewer visual hallucinations and better overall performance.\"}", "{\"comment\": \"Dear Reviewer T8Kc,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer yy1F,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"Q5. Since EMMA alters the original MLLM architecture, its inference complexity will also increase.\", \"comment\": \"**Q5. Since EMMA alters the original MLLM architecture, its inference complexity will also increase. Table 4 should include a breakdown of the increased complexity due to hierarchical alignment. For fairness, Table 4 should also provide EMMA\\u2019s inference speed when using Mamba LLM-2.8B.**\\n\\n**Alterations to inference speed.** Because both the structural alignment and the hierarchical alignment consider visual features, which are used to boost the quality of visual latents and encourage better multi-modal alignment during the *training* stage, they are discarded during inference as they are not required for visual question answering. Thus, as shown in the provided component analysis table, they do not introduce additional complexity during inference. 
\\n\\n**EMMA-V1 Inference Time.** As shown in the table for Q1, during inference, the decoder is discarded, and only the original LLM outputs are considered. Thus, EMMA-V1\\u2019s inference speed is not reported separately because both EMMA-V1 and Cobra use the Mamba LLM-2.8B backbone, so they should have the same inference speeds. We will include the EMMA-V1 inference time in the final manuscript.\"}", "{\"title\": \"Q4. Currently, models use ViT as the image encoder. Is it possible to build an MLLM entirely based on Mamba? What potential advantages and challenges might this approach entail?\", \"comment\": \"**Q4. Currently, models use ViT as the image encoder. Is it possible to build an MLLM entirely based on Mamba? What potential advantages and challenges might this approach entail?**\\n\\n\\n**Fully Mamba-based MLLM.** The potential advantage of a fully Mamba-based model is the further improvement in inference speed afforded by the Mamba architecture. While we believe that it is technically possible to build an MLLM entirely based on Mamba, this is not achievable given the currently available models, due to the lack of large pretrained visual Mamba models. As you may know, the typical framework of multimodal LLMs consists of a pretrained visual encoder, a multimodal projector, and a pretrained backbone LLM. However, no large pre-trained visual Mamba is currently available. In fact, the largest off-the-shelf pretrained visual Mamba models [4, 5] are less than 100M in parameters. It is possible to directly replace the visual encoder and projector with Mamba layers; however, jointly training a newly-initialized vision encoder with the LLM backbone results in poor performance across all metrics. As a preliminary study for this paper, we have trained a version of Cobra with the vision encoder replaced by a newly initialized VMamba encoder (around 700M parameters). 
Evaluation on the GQA benchmark demonstrated a 35\\\\% drop in performance, suggesting that a pretrained Mamba-based visual encoder is necessary for the adaptation of a fully Mamba-based MLLM.\"}", "{\"title\": \"References.\", \"comment\": \"1. Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.\\n2. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.\\n3. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020.\\n4. Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model. arXiv preprint arXiv:2401.10166, 2024.\\n5. Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024.\\n6. Sucheng Ren, Xianhang Li, Haoqin Tu, Feng Wang, Fangxun Shu, Lei Zhang, Jieru Mei, Linjie Yang, Peng Wang, Heng Wang, et al. Autoregressive pretraining with mamba in vision. arXiv preprint arXiv:2406.07537, 2024.\\n7. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.\\n8. Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.\\n9. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Emu: Generative pretraining in multimodality. In The Twelfth International Conference on Learning Representations, 2023a.\\n10. 
Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023b.\\n11. Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative multimodal models are in-context learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14398\\u201314409, 2024.\\n12. Fei Zhao, Taotian Pang, Chunhui Li, Zhen Wu, Junjie Guo, Shangyu Xing, and Xinyu Dai. Aligngpt: Multi-modal large language models with adaptive alignment capability. arXiv preprint arXiv:2405.14129, 2024.\\n13. Han Zhao, Min Zhang, Wei Zhao, Pengxiang Ding, Siteng Huang, and Donglin Wang. Cobra: Extending mamba to multi-modal large language model for efficient inference. arXiv preprint arXiv:2403.14520, 2024a.\\n14. Guanqun Wang, Xinyu Wei, Jiaming Liu, Ray Zhang, Yichi Zhang, Kevin Zhang, Maurice Chong, and Shanghang Zhang. Mr-mllm: Mutual reinforcement of multimodal comprehension and vision perception. arXiv preprint arXiv:2406.15768, 2024.\\n15. Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar, Joshua M Susskind, and Armand Joulin. Scalable pre-training of large autoregressive image models. arXiv preprint arXiv:2401.08541, 2024.\\n16. Bo Li, Peiyuan Zhang, Jingkang Yang, Yuanhan Zhang, Fanyi Pu, and Ziwei Liu. Otterhd: A high-resolution multi-modality model. arXiv preprint arXiv:2311.04219, 2023.\\n17. Xiangyu Zhao, Xiangtai Li, Haodong Duan, Haian Huang, Yining Li, Kai Chen, and Hua Yang. Mg-llava: Towards multi-granularity visual instruction tuning. arXiv preprint arXiv:2406.17770, 2024b\\n18. Zonghao Guo, Ruyi Xu, Yuan Yao, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, and Gao Huang. Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images. 
In European Conference on Computer Vision, pp. 390\u2013406. Springer, 2025.\\n19. Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, et al. Slimpajama-dc: Understanding data combinations for llm training. arXiv preprint arXiv:2309.10818, 2023.\\n20. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.\"}", "{\"title\": \"Further discussion on Q1\", \"comment\": \"Thanks for your reply! Maybe I didn't express my concerns about Q1 clearly. My apologies. Here I further express my concerns.\", \"q1\": \"The pixel-wise alignment loss in your paper is essentially proposed to enhance the visual representation ability of the model. The contrastive loss in CLIP and SigLIP is also proposed to enhance the visual representation. I am curious about the comparison experiments between them, i.e., replacing the pixel-wise alignment loss with that in CLIP or SigLIP directly, while other components in your method are not required to be changed.\\n\\nHope further clarification can help you understand my concerns.\"}", "{\"comment\": \"Thanks for the clarification, and most of my concerns have been resolved. I'd suggest the authors incorporate more motivations/rationales in their camera-ready version if the paper gets accepted.\"}", "{\"title\": \"Reply to Further discussion on Q1\", \"comment\": \"Thank you for the clarification! Due to limited time and computational resources, we might not be able to conduct a full range of experiments on all metrics. However, as a preliminary study for this paper, we have actually tried directly applying the CLIP loss to our baseline model, where the CLIP loss aligns the textual and visual embeddings in the mamba LLM. 
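For reference, a CLIP/SigLIP-style contrastive objective of the kind discussed here pulls matched image/text embeddings together rather than constraining the visual latents themselves; a minimal symmetric InfoNCE sketch (the temperature, batch size, and toy embeddings are illustrative, not the exact configuration used in our experiments):

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE as in CLIP: matched (image_i, text_i) pairs are
    positives; all other pairs in the batch act as negatives."""
    logits = l2_normalize(img_emb) @ l2_normalize(txt_emb).T / temperature
    labels = np.arange(len(img_emb))

    def cross_entropy(lg):
        p = np.exp(lg - lg.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        return -np.mean(np.log(p[labels, labels]))

    # image-to-text and text-to-image directions, averaged
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
img_emb = rng.normal(size=(4, 8))                    # batch of 4, dim 8
txt_emb = img_emb + 0.05 * rng.normal(size=(4, 8))   # nearly matched pairs
loss = clip_contrastive_loss(img_emb, txt_emb)
```

Note that this objective only encourages cross-modal agreement of the embeddings; it places no direct constraint on how much visual detail the latents retain.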
However, we observe that the contrastive loss, as well as other transformer-based techniques for multimodal alignment, fails to offer improvement in the scenario of Mamba-based LLMs, as evidenced by the performance on TextVQA (see *Contrastive* in the table below):\\n\\n| Model | EMMA-V1 | EMMA-V2 | Cobra | Compression | Masked | Contrastive |\\n|--------------|---------|---------|-------|-------------|--------|-------------|\\n| TextVQA | 57.2 | 56.2 | 52.4 | 45.8 | 46.1 | 50.2 |\\n\\nHere Compression denotes visual feature compression techniques (TokenPacker), Masked denotes Masked Language Modeling, and Contrastive denotes a contrastive vision-language loss as in CLIP. As shown, using a CLIP loss reduces the baseline Cobra model from 52.4 to 50.2. \\n\\n**Hypothesis on CLIP loss performance in Mamba MLLMs.** We hypothesize that the reason behind this phenomenon is that CLIP only strengthens the alignment between textual and visual features rather than boosting the quality of the visual features themselves, as EMMA does. Due to the inherent imbalance of visual and textual latents in Mamba models, this results in further reduced performance, where high-quality textual features are further aligned with low-quality image features.\\n\\nIn the final accepted version of the paper, we will include more experiments detailing this issue, as a new section in the appendix.\"}", "{\"title\": \"Thank you for your comments!\", \"comment\": \"Thank you again for your comments and suggestions for our paper. We greatly appreciate your input on our paper with respect to additional analysis of losses and incorporation of high-resolution dataset results. We will make sure to further polish our paper based on these aspects in the final version, if accepted to the conference.\"}", "{\"title\": \"Q5. Feature fusion is not motivated well.\", \"comment\": \"**Q5. Feature fusion is not motivated well and seems a general operation for representation enhancement. 
It would be better to provide a more detailed justification for the choice of feature fusion technique. Specifically, an explanation is expected of how your approach differs from or improves upon standard feature fusion methods, particularly in the context of Mamba-based architectures.**\\n\\n\\n**Mamba-based feature-fusion methods.** To our knowledge, current Mamba-based feature fusion methods include MambaDFuse [20], fusion-Mamba [21] and FusionMamba [22]. MambaDFuse proposes a dual-phase feature fusion module that consists of shallow fuse and deep fuse modules. The shallow fuse module achieves lightweight exchange of features from multiple modalities, while the deep fuse module, a newly designed Multi-modal Mamba (M3) block, guides the generation of modality-fused features and incorporates local detail characteristics from different modalities. Fusion-Mamba proposes a Fusion-Mamba block (FMB) to map cross-modal features into a hidden state space for interaction, thereby reducing disparities between cross-modal features and enhancing the representation consistency of fused features. FusionMamba designs a Mamba model suitable for fusion tasks by integrating the Visual State Space Model with dynamic convolution and channel-wise attention.\\n\\n\\n**Improvements from previous Mamba-based methods.** We believe the novelty of our work lies in two perspectives. First, we are the first to examine feature fusion of Mamba models in the context of multimodal LLMs. Previous feature fusion works only focus on multi-stream multimodal models which do not utilize a Mamba-based LLM, but rather consist of separate branches with similar sizes to encode both modalities. Thus, the differences in underlying model architectures call for different design choices in feature fusion methods. Previous Mamba-based multimodal LLM methods, on the other hand, adapt the Mamba LLM to typical MLLM frameworks without considering feature fusion. 
Our work is the first to consider feature fusion in MLLMs to boost the quality of visual features and ensure better multimodal alignment, as justified empirically by noticeable performance gains. Secondly, we are the first to examine feature fusion of same-modality features in Mamba-based models. Previous feature fusion works only consider the fusion of multimodal features, rather than augmenting useful information from same-modality features at different scales. Our work proposes a novel Mamba-based multi-scale feature fusion module that effectively combines visual features from different layers, alleviating the gradual loss of visual information in the Mamba LLM.\"}", "{\"title\": \"References\", \"comment\": \"1. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36, 2024.\\n2. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748\\u20138763. PMLR, 2021.\\n3. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11975\\u201311986, 2023.\\n4. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\\u00e1r, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000\\u201316009, 2022.\\n5. Bo Li, Peiyuan Zhang, Jingkang Yang, Yuanhan Zhang, Fanyi Pu, and Ziwei Liu. OtterHD: A high-resolution multi-modality model. arXiv preprint arXiv:2311.04219, 2023.\\n6. Kun Li, George Vosselman, and Michael Ying Yang. HRVQA: A visual question answering benchmark for high-resolution aerial images. 
ISPRS Journal of Photogrammetry and Remote Sensing, 214:65\\u201381, 2024.\"}", "{\"title\": \"Q3. EMMA uses the visual features from the vision encoder as the target of the Pixel-wise Alignment Loss. However, visual features from the vision encoder may lose the fine-grained information. Does this problem affect the performance of EMMA?\", \"comment\": \"**Q3. EMMA uses the visual features from the vision encoder as the target of the Pixel-wise Alignment Loss. However, visual features from the vision encoder may lose the fine-grained information. Does this problem affect the performance of EMMA?**\\n\\nEMMA uses the original image and the visual portion of the LLM output for pixel-wise alignment. Your insight is completely correct \\u2013 if we replace the original image with visual features from the vision encoder to conduct pixel-wise alignment, where the LLM visual features are aligned with the visual features from the vision encoder, this results in severe performance degradation and negatively affects the performance of EMMA, presumably due to the loss of fine-grained information, as you pointed out. The ablation study in section 4.4 Pixel vs. Feature Alignment showcases this problem. Thus, to mitigate the loss of fine-grained visual information, we align the original image with the LLM visual feature in a pixel-wise fashion, thus the name pixel-wise alignment loss.\"}", "{\"title\": \"Q1. Can you clarify how the proposed loss function leverages or is tailored to the unique properties of Mamba architectures compared to transformer-based VLMs?\", \"comment\": \"**Q1. Can you clarify how the proposed loss function leverages or is tailored to the unique properties of Mamba architectures compared to transformer-based VLMs?**\\n\\nTo answer this question, we present more background information detailing the motivation of our work, and how our work differentiates itself from its precursor. 
Then, we answer the question of whether the proposed loss function leverages or is tailored to the unique properties of Mamba architectures.\\n\\n**Clarification of Motivation.** Our research is grounded in the observation that Mamba models exhibit an imbalance in handling visual and textual information. We argue for the imbalanced quality of visual and textual latents in Mamba models from two perspectives, architectural and empirical. \\n\\n\\n**Architectural imbalance.** The inherent linear scanning mechanism of Mamba poses challenges in effectively extracting visual features on the same level as textual features. On one hand, transformer-based [1] approaches utilize the self-attention mechanism, which extracts global relationships between different parts of the input sequence. They also utilize a position encoding for learning spatial relationships of visual tokens. On the other hand, CNN-based approaches [2] utilize local receptive fields to connect each neuron to a local region of the input data, allowing interaction of neighboring pixels in all directions. The effectiveness of both methods in processing visual information lies in their ability to account for global, or at the very least local, clusters of the visual input. However, Mamba utilizes a selective-scan mechanism that only processes sequences in a linear order. While Mamba models achieve excellent performance on textual data, effectively capturing long-term dependencies in multiple language benchmarks [3], they have difficulty handling image data, where pixel-wise relationships may span multiple rows and columns. Visual Mamba models have been proposed to counteract this issue by combining additional scans from different directions [4] and utilizing position encodings similar to those of ViTs [5]. However, they nevertheless rely on the linear selective-scan mechanism. 
In the case where features possess large embedding dimensions, which is typical in MLLM scenarios, it becomes hard for the model to extract visual cues that are far apart spatially, as the degrees of freedom between visual tokens across different rows and columns become huge. \\n\\n\\n\\n\\n**Empirical imbalance.** Mamba models experience an imbalance in the empirical evaluation of vision tasks and textual tasks when scaled up in parameters. [6] observes the performance degradation of visual Mamba models when scaled up. Below is a table taken directly from their paper [7, 8].\\n\\n| Model | VIM-B | VIM-L | VIM-H |\\n|--------------|-------|-------|-------|\\n| Size (M) | 98 | 340 | 755 |\\n| ImageNet Acc.| 81.2 | 81.0 | Collapsed |\\n\\n\\nAs shown, plain Mamba models experience a severe degradation in processing visual features when scaled up. At the 700M parameter scale, the visual Mamba collapses entirely, underscoring the need for effective approaches to alleviate this performance degradation, specifically in Mamba-based models. On the other hand, plain Mamba models do not experience performance degradation in processing textual data, as shown by the experimental results in the original Mamba papers [7, 8]. \\n\\n| Model | Mamba-790M | Mamba-1.4B | Mamba-2.8B | Mamba2-780M | Mamba2-1.3B | Mamba2-2.7B |\\n|--------------|------------|------------|------------|-------------|-------------|-------------|\\n| Size (M) | 790 | 1,400 | 2,800 | 780 | 1,300 | 2,700 |\\n| HellaSwag | 55.1 | 59.1 | 66.1 | 54.9 | 59.9 | 66.6 |\\n| LAMBADA | 62.7 | 64.9 | 69.2 | 61.7 | 65.7 | 69.7 |\\n\\nThus, we observe an imbalance in the processing capabilities of Mamba on visual and textual inputs when scaled up to billion-parameter models. 
This phenomenon underscores the need for more effective visual processing in large-scale Mamba models to ensure that visual and textual features are adequately aligned.\\n\\n**Is pixel-wise alignment unique to Mamba?** The answer to this question is Yes and No, depending on the perspective. We will answer this question from two different perspectives to showcase the meaningfulness of our research in the grand scheme of MLLMs.\"}", "{\"title\": \"Further Discussion on Q4 and Q5\", \"comment\": \"Thank you again for your reply to our comments.\\n\\nQ4. Thank you for your further clarifications. \\n**Difficulties in making fair comparisons.** We believe that with the current architecture of our model, fair comparisons with these methods at high resolution are hard to achieve. Mini-Gemini proposes a dual vision encoder that contains an adaptive CNN-based high-resolution (HR) encoder and a transformer-based low-resolution (LR) encoder. The high-resolution features are obtained by upsampling and concatenating features from different convolution stages, which are unconstrained by input shape. On the other hand, TokenPacker proposes a dynamic image slicing scheme to flexibly handle higher-resolution images by converting them into its ViT-friendly 336x336 dimensions. A commonality of their approaches is that they allow **adaptable image sizes**, either through more flexible convolution layers or through dynamic slicing. On the other hand, EMMA utilizes fixed DinoV2 and SigLIP visual encoders (both take 336x336 pixels). We also do not include an adaptive slicing technique as designed by TokenPacker. Thus, it becomes extremely difficult to evaluate high-resolution images in our current setting, which would require a complete re-training of our model through a different adaptive visual pipeline. \\n\\n**Sensitivity to fine-grained detail vs. 
high resolution.** The inspiration behind proposing the multi-scale fusion module for preserving fine-grained detail comes from the observation that Mamba models tend to gradually lose fine-grained visual information. Thus, our model is mostly concerned with reducing this information loss in the general image domain, not with image resolution. In fact, the degradation of visual features is more significant at lower resolutions, which better reflects the capability of our model to retain fine-grained visual features without losing them. \\n\\n**Issue with current high-resolution benchmarks.** Current methods utilize the same low-resolution VQA tasks and simply upsample the images to high resolution. These images are still low-resolution to begin with, and may still fail to contain fine-grained visual information even when upsampled to high resolutions. We sample a range of images from the GQA, VQA-v2, TextVQA, VSR, and POPE benchmarks and find that the images are (333, 500), (640, 480), (943, 1024), (480, 640), (426, 640) in dimension. These resolutions, besides the images from the TextVQA dataset, are far from the reported upsampled 1088, 1344, and 1536 dimensions. Thus, we believe that naturally high-resolution images are needed to fully validate performance on high-resolution inputs. \\n\\n**Attempt to show results in high-resolution.** Given the limited time and computation budget, we will try our best to show our model's performance on high-resolution images. Because the resolution for our model is fixed, due to the nature of the frozen ViT, a quick solution could entail slicing a high-resolution image into a fixed number of slices and taking the average performance on all such slices as the final result. We will use the remaining time to perform this experiment and update on OpenReview as soon as possible. 
In the case where we do not finish in time, we will continue running these experiments and include them in the final accepted version of the manuscript.\"}", "{\"metareview\": \"The paper introduces Empowering Multi-modal Mamba with Structural and Hierarchical Alignment (EMMA), enhancing Mamba multi-modal large language models (MLLM) by improving their ability to extract fine-grained visual information. This paper is well written and easy to understand. Most of the reviewers point out that the pixel-wise alignment loss is interesting and useful, which is demonstrated through extensive experiments showing good performance. The authors are required to update the final version of their paper considering the reviews. All reviewers agree to accept this paper, thus I lean toward a recommendation of acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Many concerns were raised, and these concerns appear to have been well addressed during the rebuttal period.\"}" ] }
EuoHhIqvRD
Is Synthetic Data Ready for Improving Visual Grounding?
[ "Ruozhen He", "Ziyan Yang", "Paola Cascante-Bonilla", "Alexander C. Berg", "Vicente Ordonez" ]
This paper extensively investigates the effectiveness of synthetic training data to improve the capabilities of vision-and-language models for grounding textual descriptions to image regions. We explore various strategies to best generate image-text pairs and image-text-box triplets using a series of pretrained models under different settings and varying degrees of reliance on real data. Through comparative analyses with synthetic, real, and web-crawled data, we identify factors that contribute to performance differences, and propose SynGround, an effective pipeline for generating useful synthetic data for visual grounding. Our findings show that SynGround can improve the localization capabilities of off-the-shelf vision-and-language models and offers the potential for infinite data generation. Particularly, SynGround improves the pointing game accuracy of pretrained ALBEF and BLIP models by 4.81% and 17.11% absolute percentage points, respectively, across the RefCOCO+ and the Flickr30k benchmarks.
[ "Visual Grounding", "Referring Expression Comprehension", "Learning from Models", "Synthetic Data" ]
https://openreview.net/pdf?id=EuoHhIqvRD
https://openreview.net/forum?id=EuoHhIqvRD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rD7rnLJT5B", "e1lL2lz3ew", "PIv2fYiwtG", "9RLsH48SiY", "00SxSAZi9Y" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730780431059, 1731655873100, 1730698575117, 1729271887952, 1730236307968 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1676/Reviewer_pC1Y" ], [ "ICLR.cc/2025/Conference/Submission1676/Authors" ], [ "ICLR.cc/2025/Conference/Submission1676/Reviewer_zEjc" ], [ "ICLR.cc/2025/Conference/Submission1676/Reviewer_4VzW" ], [ "ICLR.cc/2025/Conference/Submission1676/Reviewer_Jhhz" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies a critical problem about data synthesis for improving visual grounding capabilities of vision and language models. It explores various strategies for generating synthetic image-text pairs and image-text-box triplets to enhance model training, comparing synthetic data with real and web-crawled data. The proposed SynGround pipeline demonstrates that synthetic data can effectively improve the localization capabilities of existing models. Notably, SynGround boosts pointing game accuracy for models like ALBEF and BLIP on benchmarks like RefCOCO+ and Flickr30k, showing the potential of synthetic data for scalable improvements in visual grounding tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Visual grounding is an essential problem with current vision and language models. It's important to study an effective approach to build synthetic data to further scale up models' visual grounding capabilities. This paper is one of the approaches that study how to generate such data, and with comparisons of various approaches to generate such data.\\n2. SynGround improves the pointing game accuracy of pretrained ALBEF and BLIP significantly.\", \"weaknesses\": \"1. 
Previous synthetic visual grounding datasets are missing; for example, GRIT data - \\\"a Ground-and-Refer Instruction-Tuning dataset with 1.1M samples.\\nGRIT contains multiple levels of spatial knowledge, covering objects, relationships, region descriptions, and complex reasoning\\\" - proposed in Ferret is not compared with. It's not clear how the proposed SynGround differs from previous synthetic visual grounding data, and how it surpasses previous data generation approaches.\\n2. The main tables lack important SOTA baselines, for example, Shikra and Ferret on RefCOCO+ and Flickr, which are a lot better than the model fine-tuned on SynGround on RefCOCO+, and similar on Flickr.\\n3. In Table 1, the proposed approach gets only an average 0.36 marginal improvement, which is no better than directly fine-tuning on existing VG data, which gets an average 0.96 improvement.\", \"questions\": \"1. The selection of base vision and language models. Why is the method not applied to more recent SOTAs? Does SynGround data also benefit recent SOTAs in visual grounding?\\n2. How does SynGround compare with existing visual grounding data collected from public sources with bounding boxes synthesized as well?\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"1. Discrimination / bias / fairness: Images with human-generated content may reflect biases, leading to fairness concerns in model training and predictions.\\n2. Legal compliance: Images containing identifiable human features may raise GDPR and copyright concerns if used without consent or proper authorization.\\n3. 
Responsible research: Releasing datasets with human-generated images requires careful handling to protect privacy and prevent potential misuse, especially if individuals are recognizable.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank all the reviewers for their time and comments. We are withdrawing the paper from consideration and will address weaknesses in our next revision.\\n\\nHere are some issues we are currently addressing including clarifications and experiments.\\n\\n**Zero-Shot Out-of-the-Distribution Visual Grounding.** This paper adopts the zero-shot training setting that trains the visual grounding methods on collected image-text-boxes and evaluates it on task-specific benchmarks, such as RefCOCO+. The leaderboard and models such as \\u201cOFA\\u201d mentioned by the reviewers are finetuned on the in-domain training set (e.g., RefCOCO+ training split) and evaluated on its testing split. The in-domain training should achieve better performance, but we posit that the out-of-distribution data is more accessible and can examine the generalization ability of methods.\\n\\n**Pointing Game Accuracy with Heatmaps vs. Accuracy with Boxes.** There are two standard settings for visual grounding, and they have different advantages. Heatmap visualizes the model\\u2019s attention, making grounding theoretically more closely where the models look. Not only is it more explainable, but it is more flexible compared to bounding boxes in terms of multiple objects or background regions. 
However, comparing the absolute value of pointing game accuracy with the Accuracy\\\\@0.5 is unfeasible.\\n\\n**Why AMC (ALBEF)?** To the best of our knowledge, AMC [CVPR 2023], which adopts ALBEF as the backbone model, is still the state-of-the-art zero-shot grounding method under the pointing game accuracy and without finetuning on the training split for individual downstream datasets. We provide several other backbones to verify the effectiveness of our synthetic data and the generalization of analysis. Additionally, we want to generate effective synthetic data for both unsupervised (image-text pairs) and supervised (image-text-box triplets) learning. It is non-trivial to select a backbone that can be used for both supervised and unsupervised training (See Appendix B).\\n\\n**Finetuning SotA Box-based Acc\\\\@0.5 Model (OFA).** Our synthetic data can improve both heatmap-based and box-based grounding methods. The Acc\\\\@0.5 SotA at RefCOCO+, OFA, mentioned by the reviewers, was trained on the RefCOCO+ training split and tested on the RefCOCO+ testing split. Here, we evaluated the zero-shot grounding performance on RefCOCO+ and finetuned it for out-of-distribution zero-shot performance. The off-the-shelf OFA-Base without finetuning on RefCOCO+ is much lower than the in-domain (row 1) training-testing result. However, our synthetic data improves OFA dramatically and comes close to VG finetuning.\\n\\n| | Finetuning Data | RefCOCO+ Val | RefCOCO+ Test A | RefCOCO+ Test B |\\n|-----------------|----------|-------|--------|--------|\\n| OFA_Base | RefCOCO+ | 81.39 | 87.15 | 74.29 |\\n| OFA_Base | - | 29.78 | 31.24 | 27.82 |\\n| OFA_Base | VG | 54.29 | 59.52 | 48.19 |\\n| OFA_Base | SynGround| 49.53 | 52.31 | 45.37 |\\n\\n**LLaVA for Visual Grounding.** LLaVA does not provide a downstream application for visual grounding, and there is no straightforward approach to using LLaVA for this purpose. 
Unlike VLMs that use image-text matching or contrastive loss for attention map extraction, LLaVA is trained with an autoregressive loss, making it unclear how to extract a GradCAM explanation. Adapting LLaVA for visual grounding would require significant modifications, such as integrating an additional box decoder or adding location tokens during training, which are beyond the scope of our research on data synthesis. \\n\\nNotably, the LLaVA model adopts the CLIP image encoder, the same as ALBEF, BLIP, and METER. By experimenting with ALBEF, BLIP, and METER, we demonstrate the effectiveness of our synthetic data through extensive experiments, potentially indicating that grounding improvements could be achieved for LLaVA if its structure is modified to suit the grounding task.\\n\\n\\n**Computation Costs.** The data scale of our synthetic data is at approximately 100K images and 1M text-box pairs. Below are the computation speeds tested on a single NVIDIA A40 GPU. The entire image-text-box synthesis takes 501 hours = 20.88 days on a single card.\\n- Image caption generation (LLaVA): 5.71s/it *100K = 158 GPU hours. \\n- Image synthesis (Stable diffusion): 4.85s/it * 100K = 135 GPU hours. \\n- Text synthesis (LLM): 0.52s/it * 1M = 144 GPU hours. \\n- Box synthesis (GLIP): 0.23s/it * 1M = 64 GPU hours. \\n\\nQuote from VG paper [3]: The dataset was curated with contributions from over 33,000 unique workers over six months, following 15 months of experimentation and refinement of data representation. \\n\\n**Effectiveness at the Same Efforts: Synthetic Data vs. Real Data.** Compared to the image-text pairs, image-text-box triplets are more laborious to curate. The scale for the existing image-text-box dataset is much smaller than the image-text datasets (e.g., LAION5B). \\nWithin the same data curation (collection & annotation) time period, SynGround\\u2019s 20.88 GPU days are 1/9 of VG\\u2019s data curation time from 33,000 unique workers. 
Below, we provide comparisons between our synthetic data and 1/9 VG data. Their performance is on par, and a scaling-up trend is observed in Sec. 3.8. Beyond the analysis and findings from the data synthesis in our paper, SynGround provides a potentially feasible way to curate image-text-boxes at scale. \\n\\n | | Data | RefCOCO+ Test A | RefCOCO+ Test B | Flickr30k | Avg \\u0394|\\n|--------------|----------|-----------|-------|--------|--------|\\n| Off-the-Shelf| - | 69.35 | 53.77 | 79.38 | - |\\n| SynGround | Synthetic| 73.70 | 56.35 | 86.89 | +4.81 |\\n| 1/9 VG | Real | 76.96 | 59.07 | 85.01 | +6.18 |\"}", "{\"summary\": \"This paper investigates the effectiveness of synthetic training data to improve the capabilities of vision-and-language models for grounding textual descriptions to image regions. They propose SynGround, an effective pipeline for generating useful synthetic data for visual grounding. Particularly, SynGround improves the pointing game accuracy of pretrained ALBEF and BLIP models by 4.81% and 17.11% absolute percentage points, respectively, across the RefCOCO+ and the Flickr30k benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper provides thorough experiments with different strategies to best generate image-text pairs and image-text-box triplets using a series of pretrained models under different settings and varying degrees of reliance on real data.\", \"weaknesses\": \"I really like the analysis in this paper. However, I'm confused about EFFECTIVENESS AND GENERALIZATION ON OTHER VLMS -- how general are the conclusions / findings in this paper (e.g., in Tables 3 and 5)? Can they apply to more recent VLMs, since both ALBEF and BLIP are smaller-sized models from before 2022? 
Could you extend the method to more recent models such as LLaVA, Phi3.5, etc so that it is more likely to be a general conclusion?\", \"questions\": \"Happy to raise my score if weakness is addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores using synthetic data to improve visual grounding in vision-and-language models. The authors present SynGround, a pipeline that generates synthetic image-text-box triplets by combining advances in text-to-image generation, language models, and object detection. They compare synthetic data with real and web-crawled data on RefCOCO+ and Flickr30k benchmarks. Results show SynGround enhances localization in ALBEF and BLIP models, outperforming web-crawled data and offering potential for infinite data generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Systematic exploration: The paper systematically explores different strategies for generating synthetic image-text and image-text-box data, providing valuable insights into the factors influencing performance. The paper compares the performance of models trained on synthetic data with models trained on real and web-crawled data.\\n\\n2. Pipeline for synthetic data generation: The proposed SynGround pipeline offers a structured approach for creating synthetic data for visual grounding, combining several advanced techniques.\\n\\n3. Outperforming web-crawled data: The finding that synthetic data outperforms web-crawled data is a notable strength, suggesting the potential for creating more tailored and effective training datasets.\", \"weaknesses\": \"1. Use of older models: The paper relies on ALBEF and BLIP, which are relatively older models in the rapidly evolving field of vision and language. 
The performance in Experiment 1 does not compare to any of the models in the paperswithcode leaderboard (e.g., https://paperswithcode.com/sota/referring-expression-comprehension-on-refcoco-1). Evaluating SynGround with more recent and state-of-the-art models would significantly strengthen the claims.\\n\\n2. Limited performance gains: While improvements are reported, the absolute gains from using synthetic data, especially when combined with real data, are relatively modest and may not be statistically significant. Error bars or further statistical analysis should be provided to support the claims of improvement.\\n\\n3. Clarity and organization: The presentation of experiments could be improved. The motivation and reasoning behind each experiment could be more clearly articulated. Consolidating related experiments (like the BLIP experiments) into fewer tables would enhance readability. The paper would benefit from focusing on the key findings, such as the comparison with web-crawled data, earlier in the presentation.\\n\\n4. 
Lack of analysis on scaling limitations: While the paper mentions the potential for infinite data generation, it does not discuss or analyze potential limitations or saturation points in scaling up the use of synthetic data.\", \"questions\": \"Have you considered evaluating SynGround with more recent and state-of-the-art visual grounding models?\\n\\nCould you elaborate on the computational resources required for generating and utilizing the synthetic data, especially in the context of scaling up to larger datasets?\\n\\nHave you observed any limitations or saturation points when increasing the scale of synthetic data used for training?\\n\\nCould you discuss the potential impact of biases present in the source data (e.g., caption descriptions) on the generated synthetic data and downstream visual grounding performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a pipeline that uses LLMs, object detector, and image generation model to improve the grounding ability of VLMs. They demonstrate that applying such a pipeline allows them to improve the performance of a baseline ALBEF model on grounding tasks (RefCOCO, Flickr30K).\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The ablations in the paper are rather comprehensive and highlight the importance of each part of the pipeline.\", \"The paper demonstrates that training on the synthetically generated data improves over the baseline results on grounding tasks.\"], \"weaknesses\": [\"The baselines are weak: ALBEF is an older model that is far from SOTA on the benchmarks reported. What about applying the method to a more recent model (such as OFA [1])? One concern is that ALBEF is a much smaller model (BERT based LM), while the pipeline used to generate synthetic data leverage larger and more capable models such as LLaVA. 
I would be more convinced if the authors can apply their approach to improve a similar sized model. This would also alleviate concerns that this method is simply distilling a stronger ALBEF model from LLaVA generated data.\", \"The gains over the baselines are not really substantial enough at the moment to warrant running this (rather convoluted) synthetic generation pipeline. Even with the synthetic data, the relative improvements are worse than using real data (Table 2) and only marginally better when combined with real data (Table 3). In Figure 5, the improvement with introducing synthetic data also seems marginal, and within the error bounds of using less data (which does not bode well for scaling).\", \"The paper is rather difficult to read, and I found it structured in quite a confusing way. Figure 1 could be replaced with an overview of the full SynGround pipeline, including captioning, bounding box generation, image generation components, as well as the training objectives detailed in Sec 3.1.\", \"**References**\", \"[1] Wang, Peng, et al. \\\"Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework.\\\" International conference on machine learning. PMLR, 2022.\"], \"questions\": \"- There are several versions of Llava (1, 1.5, 1.6) as well as different model sizes (7B, 13B, 34B). Which one is being used in this paper?\\n- Another common semi-synthetic pipeline is to re-caption images (e.g., see Sec 7.1.1 of [2]). How would such a recaptioning approach fare on the CC experiments in Sec. 3.9?\\n\\n**References**\\n\\n[2] Dubey, Abhimanyu, et al. \\\"The llama 3 herd of models.\\\" arXiv preprint arXiv:2407.21783 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
EukM0UuqLx
Token-level Correlation-guided Compression for Efficient Multimodal Document Understanding
[ "Renshan Zhang", "Yibo Lyu", "Rui Shao", "Gongwei Chen", "Weili Guan", "Liqiang Nie" ]
Cropping high-resolution document images into multiple sub-images is the most widely used approach for current Multimodal Large Language Models (MLLMs) to do document understanding. Most current document understanding methods preserve all tokens within sub-images and treat them equally. This neglects their different informativeness and leads to a significant increase in the number of image tokens. To perform a more adaptive and efficient document understanding, we propose Token-level Correlation-guided Compression, a parameter-free and plug-and-play methodology to optimize token processing. Firstly, we propose an innovative approach for assessing the pattern repetitiveness based on the correlation between patch tokens. This method identifies redundant tokens, allowing for the determination of the sub-image's information density. Secondly, we present a token-level sampling method that efficiently captures the most informative tokens by delving into the correlation between the \texttt{[CLS]} token and patch tokens. By integrating these strategies, we develop a plug-and-play Token-level Correlation-guided Compressor module that can be seamlessly incorporated into MLLMs utilizing cropping techniques. This module not only enhances the processing speed during training and inference but also maintains comparable performance. We conduct experiments with the representative document understanding model mPLUG-DocOwl1.5 and the effectiveness is demonstrated through extensive comparisons with other compression methods.
[ "Multimodal Large Models", "Token Compression", "High-resolution Image" ]
https://openreview.net/pdf?id=EukM0UuqLx
https://openreview.net/forum?id=EukM0UuqLx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gT9X5q55t4", "Z3lRWkxOEo", "IhihjHr4BS", "5rcxLzZNP8", "0Urlm0Emgo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730641611664, 1729991336143, 1731490539151, 1730780221093, 1730466218775 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6158/Reviewer_oT3k" ], [ "ICLR.cc/2025/Conference/Submission6158/Reviewer_2zFK" ], [ "ICLR.cc/2025/Conference/Submission6158/Authors" ], [ "ICLR.cc/2025/Conference/Submission6158/Reviewer_6yhF" ], [ "ICLR.cc/2025/Conference/Submission6158/Reviewer_LVBq" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a Token-level Correlation-guided Compression (TCC) method for dynamic visual token compression, aiming to improve the efficiency of Multimodal Large Language Models (MLLMs) in document understanding tasks. The method compute the information density through patch-patch correlations, leveraging token correlations to guide the compression process in document understanding tasks, aiming to reduce the number of visual tokens while maintaining model performance. Experiments with mPLUG-DocOWL1.5 shows the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces the idea of information density based on patch-patch correlations, defining the information density of sub-images as the proportion of non-redundant tokens. This is then used to determine the token compression ratios adaptively. The proposed idea has some technical values.\", \"weaknesses\": \"1. The innovation of the proposed method is limited and not strong. Analyzing correlations or importance between visual tokens for compression of vision tokens has been explored in many recent researches. 
The simple idea of analyzing correlations between image patches and using [CLS] token correlations with patch tokens to sample the most informative tokens is straightforward and doesn't bring many new insights to the field.\\n\\n2. In Table 1, the paper only compares with two token compression methods, which is not sufficiently convincing. There are several recent dynamic visual token compression methods, but the authors did not compare with these methods, such as FastV and more: \\n (1) Chen, L. et al., An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv preprint arXiv:2403.06764. \\n (2) Lin, Z.; Lin, M.; Lin, L.; and Ji, R. 2024. Boosting Multimodal Large Language Models with Visual Tokens Withdrawal for Rapid Inference arXiv preprint arXiv:2405.05803.\\n (3). Zhang J. et al., Dockylin: A large multimodal model for visual document understanding with efficient visual slimming, arXiv preprint arXiv:2406.19101 \\n ...\\n\\n3. While the authors claim their method is plug-and-play and can be seamlessly integrated into existing MLLMs using the proposed techniques, they only validated it on mPLUG-DocOWL1.5. The lack of experiments on other document understanding multimodal models (especially some recent SoTA models) makes it unclear whether the method is universally effective, or just works well for mPLUG-DocOWL1.5. The experiments are not solid and convincing enough.\\n \\n4. The compression effect is limited, achieving only 11.5% compression on mPLUG-DocOWL1.5. This compression ratio is not impressive compared to other recent multimodal model token compression methods.\", \"questions\": \"-- In Section A.2.3, the paper describes the results of Local Information Mining when selecting different layers. 
How does the performance of Global Information Mining vary with the selection of different layers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on document understanding. Most current methods ignore the different informativeness of the tokens in the sub-images, leading to a significant increase in the number of image tokens. Hence, the authors propose a plug-and-play Token-level Correlation-guided Compression module, which evaluates the repeatability and informativeness based on the correlation between image tokens. Experiments demonstrate the effectiveness of this module.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Topics are current and important, focusing on compressing redundant information to improve model inference efficiency. Experimental results show that the proposed method outperforms existing solutions in terms of token compression efficiency and performance retention.\", \"weaknesses\": \"1) The role of the prompt is overlooked. Given that token compression is based entirely on images, it remains unclear how to ensure that the information required for the prompt is retained. It would be better to compare CLS patch-based and prompt-based approaches to token compression guidance.\\n2) The effectiveness of the plug-and-play approach requires further validation. It would be beneficial to demonstrate results on more baselines.\\n3) The analysis of experiments needs to be enriched. For example, why the model fine-tuned on TextVQA, TextCaps and VisualMRC datasets is not as good as plug-and-play instead.\\n4) There is an absence of direct comparisons with other token compression methods. 
It would be better to evaluate the performance and efficiency of the proposed method alongside existing token compression methods under consistent baselines.\\n5) The effectiveness of the Global Info Token requires additional validation. As shown in Figure 9, the Global Info Token is not consistently positioned in information-rich areas. Expanding the results presented in Table 4 to include more datasets could provide a more comprehensive assessment.\", \"questions\": \"Compared to PruMerge+, what are the advantages of the token compression method proposed in this paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes multiple heuristics to improve the speed of Multi-Modal Large Language Models for document understanding. This is achieved by discarding uninformative tokens before feeding into the MLLM. For this purpose, the paper also comes up with a heuristic definition of \\\"information density\\\". The proposed method speeds up one of the state of the art models (DocOW11.6) by 13% in average (in exchange of minor performance degradation).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Two simple and intuitive heuristics providing non-trivial speed improvements for MLLM models.\", \"Multiple ablations are performed to justify the design decisions\"], \"weaknesses\": [\"The paper proposes \\\"incremental\\\" (13%) efficiency improvements for the state of the art MLLM document understanding models in exchange for non-zero model accuracy regressions.\", \"The method requires fine-tuning of target MLLM model for the best results. 
Otherwise, the trade-off between model accuracy loss and inference speed improvement is not justifiable.\", \"Theoretical justification for the proposed \\\"information density\\\" method is absent.\", \"Limitations of the proposed method needs to be discussed in depth.\"], \"questions\": \"Q1: Figure 8 and 9 suggests that the proposed method is primarily good at identifying the background pixels as uninformative. Would document background detection perform better as a baseline to detect and remove uninformative patches before feeding into MLLM?\", \"q2\": \"Can the proposed method be applied beyond document understanding? Is there an overarching theme?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focus on the token compression for Multimodal Large Language Models. The authors propose Token-level Correlation-guided Compression. By calculating the information density of sub-images and efficiently capturing the most informative tokens through the correlation between the [CLS] token and patch tokens, the authors have developed a plug-and-play compression module to remove redundant tokens produced by existing image cropping methods, accelerating the model's training and inference speed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The Token-level Correlation-guided Compression filters out the truly informative parts from the visual tokens through patch-patch correlation-guided information density calculation and cls-patch correlation-guided informative token sampling, accelerating the model's training and inference speed.\", \"weaknesses\": \"1.Although the authors' method compresses visual tokens and improves the speed of model training and inference, it results in a noticeable performance drop, and the speed improvement is not outstanding.\\n\\n2.The author claims that the proposed method is 
plug-and-play, but they only conducted experiments on the DocOwl-1.5 model, which is not very convincing. The proposed method needs to be validated on more models, especially on some higher-performance models such as InternVL2 and QWEN2VL.\\n\\n3.Lacking comparison with some of the latest token compression methods, such as FastV, which has been accepted by ECCV.\\n\\nChen L, Zhao H, Liu T, et al. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models[J]. arXiv preprint arXiv:2403.06764, 2024.\\n\\n4.Lacking results on the commonly used benchmark OCRBench.\\n\\nLiu Y, Li Z, Yang B, et al. On the hidden mystery of ocr in large multimodal models[J]. arXiv preprint arXiv:2305.07895, 2023.\\n\\n5.In Line 139, the authors claim that \\\"Despite their robust capabilities for document understanding, these models remain significantly inefficient.\\\" However, in the experimental tables, the author does not compare the efficiency with other methods. It would be better to include a comparison of efficiency with other methods in Table 1.\", \"questions\": \"Does compressing tokens also offer advantages in terms of memory consumption? If so, the author should present this information. The proposed method does not seem to be designed specifically for documents, so why were experiments only conducted on document understanding benchmarks? Why not conduct experiments on more general multimodal understanding as well?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
EukID7GvBy
Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models
[ "Bozhou Li", "Hao Liang", "Yang Li", "Fangcheng Fu", "Hongzhi Yin", "Conghui He", "Wentao Zhang" ]
During the pretraining phase, large language models (LLMs) acquire vast amounts of knowledge from extensive text corpora. Nevertheless, in later stages such as fine-tuning and inference, the model may encounter knowledge not covered in the initial training, which can lead to hallucinations and degraded performance. This issue has a profound impact on the model's capabilities, as it will inevitably face out-of-scope knowledge after pretraining. Furthermore, fine-tuning is often required to adapt LLMs to domain-specific tasks, necessitating the acquisition of new knowledge. However, this phenomenon limits the model’s ability to learn and integrate new information during fine-tuning. The effectiveness of fine-tuning largely depends on the type of knowledge involved. Existing research suggests that fine-tuning the model on partially mastered knowledge—for instance, question-answer pairs where the model has a chance of providing correct responses under non-greedy decoding—can enable the model to acquire new knowledge while mitigating the forgetting of previously learned information. Notably, this approach can still lead to the forgetting of fully mastered knowledge, constraining the fine-tuning dataset to a narrower range and limiting the model's overall potential for improvement. Given the model’s intrinsic reasoning abilities and the interconnectedness of different knowledge areas, it is likely that as the model’s capacity to utilize existing knowledge improves during fine-tuning, previously unmastered knowledge may become more understandable. To explore this hypothesis, we conducted experiments and, based on the results, proposed a two-stage fine-tuning strategy. This approach not only improves the model's overall test accuracy and knowledge retention but also preserves its accuracy on previously mastered content. When fine-tuning on the WikiQA dataset, our method increases the amount of knowledge acquired by the model in this stage by 24%.
[ "Large Language Model; DCAI ; Fine-tuning" ]
https://openreview.net/pdf?id=EukID7GvBy
https://openreview.net/forum?id=EukID7GvBy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "aFTbD2daNC", "VMM5vxWM76", "U6P23pYRqd", "TQLtW39nbB", "PyJMufANa8", "Lhd9D7rTV1" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730275278007, 1730843258726, 1733737463262, 1729279702094, 1730649771196, 1730819871431 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9552/Reviewer_9bFL" ], [ "ICLR.cc/2025/Conference/Submission9552/Reviewer_W8Wn" ], [ "ICLR.cc/2025/Conference/Submission9552/Authors" ], [ "ICLR.cc/2025/Conference/Submission9552/Reviewer_n8zH" ], [ "ICLR.cc/2025/Conference/Submission9552/Reviewer_KNQC" ], [ "ICLR.cc/2025/Conference/Submission9552/Reviewer_CPVv" ] ], "structured_content_str": [ "{\"summary\": \"The paper investigates the challenges of finetuning in language models, noting that introducing new knowledge can lead to hallucinations and degraded performance. The authors hypothesize that reinforcing partially mastered knowledge could improve the model\\u2019s performance with new knowledge. They use a two-phase approach: (1) Knowledge detection (types) and (2) Two-stage finetuning with 'MaybeKnown' and 'HighlyKnown' Knowledge Replay. 
Primarily using Qwen-2, they analyze the contributions of each finetuning stage.\", \"writing_and_clarity\": [\"The paper suffers from unclear writing, especially in the abstract and motivation sections.\", \"Poor organization and coherence in presenting experiment motivation, methods, and results, impacting the overall flow.\", \"Findings are not clear and are not connected to the observations enough.\"], \"reliance_on_previous_contribution\": [\"Misinterpret \\u2018MaybeKnown\\u2019 recommendation (line 374) for improving performance on less known knowledge - Gekhman et al 2024 claimed that the \\u2018MaybeKnown\\u2019 helps to less hallucinate data the model already knows.\"], \"explanation_and_motivation\": \"\", \"lacks_clear_motivation_for_choosing_specific_strategies\": [\"No justification for preferring a replay-based approach for catastrophic forgetting (line 154).\", \"Insufficient explanation for focusing on the \\u2018MaybeKnown\\u2019 category over \\u2018HighlyKnown\\u2019 for finetuning.\", \"Weak or missing rationale for strategies 1-5 (lines 403-416).\", \"Justification is absent for selecting the first answer in multi-answer QA (lines 245-246).\", \"Foundational intuition (Section 3.1) for acquiring new knowledge from partial knowledge is weak and lacks references.\"], \"weak_results\": [\"The proposed method shows modest improvements (~4-7%) in accuracy (Table 7) which is not strongly associated with a specific factor.\", \"Results lack consistency: replay strategies sometimes decrease performance (e.g., Table 9, Strategies 4 and 5 for WeaklyKnown).\", \"Small performance changes in Table 8 might result from various, unexamined factors.\", \"No analysis of performance differences between models (Qwen and LLaMA).\", \"Not robust - used only one dataset (closed).\"], \"results_and_clarity\": [\"Results are presented in raw counts (Tables 3, 4, 5, 10), without proportions, making the transition and interpretation difficult.\", \"Graph construction 
(line 338) is unclear and does not clearly support the hypothesis; the contribution of node reclassification is ambiguous.\"], \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Proposes Interesting intuition: integrating partially-known knowledge can boost performance on new known knowledge\", \"Presents a clear, structured methodology in Fig. 1\"], \"weaknesses\": \"The paper's contribution appears limited, as the concept of using \\u2018MaybeKnown\\u2019 knowledge to enhance test-set performance has already been established by Gekhman (2024). The newly proposed two-stage finetuning with a replay method results in only minor improvements (~1-2%), which, coupled with weak, non-robust analysis and disorganized writing, supports a recommendation for rejection.\", \"questions\": \"Formatting:\\n\\nMissing spaces and inconsistent citation format throughout the paper (e.g., Intro lines 40, 45, 51).\", \"terminology\": \"\", \"define_terms_before_using_them\": \"\\u201cKG\\u201d is used without explanation (line 67).\\nKnowledge types (line 75).\\nPrompt template (line 247).\\nThe second stage of finetuning in the Introduction is unclear and would benefit from clarification.\", \"methodology\": [\"Method Section 3.2: clarify data categories used in the finetuning steps (lines 203-206).\", \"Confirm if knowledge re-detection after each finetuning stage is performed on the training set\\u2014currently unclear.\", \"Definitions and Supplementary Information:\", \"Table 1 is identical to the one in Gekhman\\u2019s work (Figure 2(a)), and presented in the main paper which might be mistakenly interpreted as\", \"some contribution.\", \"Define \\u201cOrigin\\u201d in Table 7. 
Does it represent baseline performance without finetuning on \\u2018MaybeKnown\\u2019?\", \"Consider adding a supplementary section, e.g., for explaining prompt template creation (line 247).\"], \"results_interpretation\": \"- Table 8: Strategy 1 and Strategy 4 have identical scores\\u2014does this imply an effect specific to \\u2018HighlyKnown\\u2019 knowledge?\\nDivide the main results from analyses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a two stage finetuning strategy which supposedly improves the models knowledge retention capacity. Using the taxonomies introduced by [1], the paper conducts multi-stage finetuning ablations on the different sequence of subsets.\\n\\n[1] https://arxiv.org/pdf/2405.05904\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Conducts several ablations on sequential multi-stage finetuning on easy to hard dataset subsets.\"], \"weaknesses\": [\"The contributions of the paper is minimal, given they performed ablations on data taxonomies introduced by [1]. At best, the work is an extension of the ablations performed by [1] themselves (see Section 5 of the paper).\", \"Only one downstream dataset (WikiQA) is considered, which limits the broader applicability of the approach.\", \"The paper is very poorly written, with large swaths of the text borderline unintelligible. Specially sections 3.1, 3.2, 4.1.2 need a lot of work to make the flow understadable.\", \"Numerous evidence on poor quality of writing / organization: Table on page 5 is redundant, statistical analysis on changes from one subset to the next is not highlighted well in Tables 2,3,4,12. 
It is highly unclear what I'm looking at - the change should be presented _relatively_, not as absolute numbers which is meaningless.\", \"[1] https://arxiv.org/pdf/2405.05904\"], \"questions\": [\"In writing, please add a space between all citations - it becomes hard to read the full sentences!\", \"What is the overall recommendation in terms of finetuning for multiple stages? It is not clear from the text.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper first designed an experiment to validate their hypothesis that as the model\\u2019s capacity to utilize existing knowledge improves during fine-tuning, previously unmastered knowledge may become more understandable. Then they propose a two-stage fine-tuning method to address this.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"knowledge probing/ augmentation is an important research problem for large language models.\"], \"weaknesses\": [\"The presentation in this paper is very poor. Actually, it's very hard for me to follow them due to the poor logic in the abstract and introduction. Also, figures and tables are not well-designed.\", \"Many format problems. There is no space between text and citation; cited and citep are not properly used in this paper; Some citations are even raw text (line 242)\", \"The proposed method and findings are trivial. 
And the experiment settings that tune the model in the test set are also problematic to me.\"], \"questions\": \"I suggest the authors submit this paper maybe to a workshop.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles the challenges that LLMs primarily acquire knowledge during pretraining and often struggle to integrate new information effectively in later stages such as fine-tuning and inference times. This can lead to issues like hallucinations and decreased performance (catastrophic forgetting).\\nThe core argument of this paper is that fine-tuning LLMs on partially mastered knowledge can help them leverage their existing knowledge base and reasoning capabilities to comprehend new concepts. The authors propose a two-stage fine-tuning to test this hypothesis.\", \"stage_1\": \"The model is fine-tuned on data representing knowledge it partially understands, meaning it has some chance of answering related questions correctly without fine-tuning.\", \"stage_2\": \"The training data is augmented with data that demonstrates improved mastery after the first stage, including knowledge initially classified as less well-understood. This augmented dataset is then used for a second round of fine-tuning.\\nThe results on the WikiQA dataset show that the two-stage fine-tuning method leads to improved test accuracy, enhanced knowledge mastery and mitigation of forgetting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation of this paper is well articulated, and it is important indeed such as how to avoid hallucination and catastrophic forgetting when LLMs are fine-tuned after pre-training.\", \"weaknesses\": \"1. 
Limited generalizability due to focus on a single closed-book question answering: The experiments rely heavily on the WikiQA dataset, which is specifically designed for closed-book question answering. This focus raises concerns about the generalizability of the results to other knowledge domains and tasks.\\n\\n2. Tiny Incremental Improvements: While the two-stage fine-tuning method shows improvements in test accuracy and knowledge mastery compared to one-stage fine-tuning, these improvements are relatively small. For example, Table 7 in the source shows that for the Qwen2 model, the accuracy improvement from one-stage to two-stage is smaller than 1%. This raises the question of whether the added complexity and computational cost of the second stage are justified by such marginal gains.\\n\\n3. Practicality of the method: Given the need for pre-classification, the practicality of the two-stage method for tasks that are not already determined is debatable.\", \"questions\": [\"The compiled \\\\cite looks broken throughout the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper attempts to address the knowledge forgetting problem in the scenario of continual fine-tuning, where we might lost some information/performance if we do continual fine-tuning.\", \"they_perform_2_stage_fine_tuning\": \"1. Categorize the data into certain categories to understand how the model knows about the previous knowledge.\\n2. The second stage with mix up with some replay data to continue fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Efforts in designing strategies for how we fine-tuning with the replay data and the training data. We can see that, some unknown data change to MaybeKnown, justifying the hypothesis.\\n2. 
Retain the model original performance, and also comparable performance with SFT over new data (1-stage fine-tuning).\", \"weaknesses\": \"1. Paper is not well written, especially the citation position.\\n2. It\\u2019s not practical to assume we have the original replay data. For example, if we use a pre-trained model (Qwen), they will not release the SFT data for the community. if we want to improve the model performance on QA using our new data, we probably don\\u2019t have original SFT data for replay. The assumption seems not practical.\", \"questions\": \"Maybe the author can help answer the second weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
EtJWnTnqku
Medical Vision Generalist: Unifying Medical Imaging Tasks in Context
[ "Sucheng Ren", "Xiaoke Huang", "Xianhang Li", "Junfei Xiao", "Jieru Mei", "Zeyu Wang", "Alan Yuille", "Yuyin Zhou" ]
This study presents Medical Vision Generalist (MVG), the first foundation model capable of handling various medical imaging tasks---such as cross-modal synthesis, image segmentation, denoising, and inpainting---within a unified image-to-image generation framework. Specifically, MVG employs an in-context generation strategy that standardizes the handling of inputs and outputs as images. By treating these tasks as an image generation process conditioned on prompt image-label pairs and input images, this approach enables a flexible unification of various tasks, even those spanning different modalities and datasets. To capitalize on both local and global context, we design a hybrid method combining masked image modeling with autoregressive training for conditional image generation. This hybrid approach yields the most robust performance across all involved medical imaging tasks. To rigorously evaluate MVG's capabilities, we curated the first comprehensive generalist medical vision benchmark, comprising 13 datasets and spanning four imaging modalities (CT, MRI, X-ray, and micro-ultrasound). Our results consistently establish MVG's superior performance, outperforming existing vision generalists, such as Painter and LVM. Furthermore, MVG exhibits strong scalability, with its performance demonstrably improving when trained on a more diverse set of tasks, and can be effectively adapted to unseen datasets with only minimal task-specific samples. The code and the benchmark will be publicly available.
[ "Medical Image Analysis", "Generalist Models" ]
https://openreview.net/pdf?id=EtJWnTnqku
https://openreview.net/forum?id=EtJWnTnqku
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCAnVCHg36", "yICcTHlLHj", "uIyWubDYd7", "q1gNX0Wglg", "pseefOAAvk", "pnHhQ514ZJ", "pHnOZ2IERr", "mb1UXCF1jx", "iROcZgNlEf", "gmBrLf7fH7", "ekQwzXEkuE", "cVPe1KqJI6", "Z2qWms46C2", "W4gV4qq6NV", "RBAml6dagQ", "P5CM6sPTR4", "Lf5gGrGvMU", "KqGAKvtw6d", "KcMwHdmQPv", "KILJhyq4Nn", "H9SPPnPwo0", "4qqGzF3cQU", "21qRJYkBbp", "1oT9GJF2qp", "1D8nPyUTuK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732695306445, 1733114780299, 1733081790266, 1732695278761, 1732949177349, 1732696096914, 1733252066675, 1730698513935, 1733124746273, 1732949136299, 1730236545965, 1732949157181, 1732695767922, 1733081965822, 1733081842397, 1732696141901, 1732695251436, 1732695977814, 1732949105777, 1730635305228, 1732695920973, 1730815970528, 1733081938678, 1732695883357, 1732642329361 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_UcLG" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_UcLG" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_crRU" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_vfk8" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_crRU" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_Q4Bs" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Authors" ], [ "ICLR.cc/2025/Conference/Submission8515/Reviewer_Q4Bs" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by authors -- Part \\u2162\", \"comment\": \"Q6: Table 1. It is important to note at the beginning of the article that in the benchmark the proportion of datasets for 4 tasks in non-equal, with the Segmentation task comprising the majority.\", \"a6\": \"We acknowledge that the number of segmentation datasets in the benchmark is higher compared to other tasks. However, the total size of the data used for low-level tasks (e.g., denoising and inpainting) is substantially larger. Specifically, the benchmark consists of 2,475,636 training image/label pairs, distributed as follows:\", \"segmentation\": \"212,811 pairs\", \"synthesis\": \"1,987,111 pairs\", \"inpainting\": \"139,311 pairs\", \"detection\": \"66,532 pairs\", \"denoising\": \"69,871 pairs\\nTo ensure a balanced contribution from high-level and low-level tasks, we adjusted the sampling weights. The segmentation task is assigned a sampling weight of 0.5, while all low-level tasks collectively share the remaining 0.5, to help maintain a more equitable representation of the tasks during training.\\n\\n[1] Xie, Z., Geng, Z., Hu, J., Zhang, Z., Hu, H., & Cao, Y. (2023). Revealing the dark secrets of masked image modeling. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14475-14485).\"}", "{\"comment\": \"Many thanks to the authors for their detailed response to the concerns raised by the reviewer.\\n\\n**A1** \\nThe reviewer characterizes MVG as an expanded segmentation model not because it incorporates additional tasks into a universal segmentation model, but due to the lack of clarity in its motivation and the specific problem it aims to address. From a clinical perspective, the additional tasks, such as cross-modal synthesis and inpainting, have not yet demonstrated clear advantages in real-world clinical practice. This makes these tasks appear incremental relative to the core task of image segmentation. Moreover, the reported registration performance of 60% Dice (a marginal improvement over the initial alignment) suggests a fundamental misunderstanding of what constitutes a clinically useful tool. Tools intended for clinical applications require both robustness and significant improvements to justify their utility in practice, which is beyond what current form of MVG can provide.\\n\\n**A3** \\nWhile the authors fail to demonstrate the clinical effectiveness of MVG, as outlined in **A1**, it is also important to evaluate the contributions from a technical perspective. Technically, the study claims to demonstrate, for the first time, that heterogeneous tasks can be coherently trained together within a unified and scalable framework for medical imaging. However, the MVG primarily borrows methodologies from the computer vision community. While leveraging advances from other domains is valid, the lack of demonstrated clinical relevance, as discussed in **A1**, further undermines its overall impact. 
\\n\\nAdditionally, the authors\\u2019 statement in **A6**, that \\\"the limitations of masked image modeling (MIM) in segmentation tasks stem from its focus on recovering local details, which compromises global contextual information crucial for segmentation,\\\" suggests a misunderstanding of many state-of-the-art masked image modeling techniques. For instance, IJEPA [1] effectively uses masked image modeling in the latent space, addressing precisely the concerns raised by the authors. \\n\\nThe paper, in its current form, is far from being ready for publication at ICLR. The reviewer suggests that the authors focus on refining the manuscript to emphasize either advancing clinical translation or pushing the technical boundaries. A clearer direction in one of these areas would strengthen the paper\\u2019s contribution and relevance.\\n\\n[1] Yann LeCun et al. IJEPA: Implicit Joint Embedding Predictive Architectures. ICCV 2023.\"}", "{\"comment\": \"Dear Reviewer Q4Bs,\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. After this deadline, we may not have the opportunity to respond.\\n\\nWe sincerely appreciate your time and thoughtful input and look forward to hearing your feedback.\\n\\nSincerely,\\n\\nAuthors of Submission 8515\"}", "{\"title\": \"Rebuttal by authors -- part \\u2161\", \"comment\": \"Q2(2): For example, as the authors mentioned, usual augmentation techniques for SSL, like image cropping, can not be applied to medical imaging. The cropping procedure might mislead the model training and plummet the performance. I wonder if the authors addressed that and conducted some experiments with (specific for medical domain) data augmentation?\\n\\nA2(2): Thank you for your insightful comment. Our framework is designed to handle various medical imaging tasks\\u2014such as cross-modal synthesis, image segmentation, denoising, and inpainting\\u2014within a unified image-to-image generation approach. 
While task-specific data augmentation techniques can vary significantly across different medical datasets, we aimed to prioritize simplicity and general applicability. Following Painter, we adopted random cropping as our data augmentation strategy and found it works effectively across different datasets.\\nHowever, we acknowledge that random cropping may not always be ideal for certain medical imaging scenarios. Addressing this limitation is beyond the scope of our current work, as data augmentation is not the primary focus of this paper. In future research, we intend to conduct a systematic analysis of augmentation techniques tailored to the medical domain. This will enable us to identify and formalize a set of universal data augmentation strategies that can further enhance the generalizability and performance of our framework across diverse medical imaging tasks.\\n\\nQ3 (1): The paper compares the authors model with generalist models, however the comparative analysis with specialist models such as U-Net and nnUNet could be more detailed. A deeper examination of the performance trade-offs between generalist and specialist models would help to position MVG\\u2019s contribution more clearly and to understand the limits of its applicability. \\n\\nA3(1): Our pipeline unlocks the possibility of integrating multiple medical vision tasks into a unified image-to-image generation framework. While generalist models may not yet outperform domain-specific models like U-Net or nnUNet in segmentation, they offer greater flexibility. 
MVG, unlike specialist models, can be directly trained across diverse tasks and adapted to new datasets with minimal task-specific samples, whereas models like U-Net and nnUNet are limited to segmentation and cannot handle other tasks.\\nTo position MVG\\u2019s contributions more clearly, we emphasize the following key points:\", \"vision_centric_generalist_model\": \"MVG represents the first vision-centric medical generalist model capable of unifying diverse tasks across multiple modalities. While existing medical generalist models, such as MedSAM and UniverSeg, focus exclusively on segmentation, MVG extends this paradigm by incorporating different types of tasks like synthesis and denoising. This study provides the first proof-of-concept for the feasibility of a generalist medical AI model capable of performing a range of medical imaging tasks within a unified framework.\", \"hybrid_training_strategy\": \"MVG employs a hybrid training strategy that combines masked image modeling with autoregressive training, enhancing its conditional image generation capabilities and demonstrating strong scalability across both data and task levels.\", \"comprehensive_benchmark_and_open_source_contribution\": \"We introduce the first comprehensive medical imaging benchmark encompassing a range of modalities, datasets, and anatomical regions. To facilitate further research and development, we will make all code and benchmarks publicly available to encourage future research on this new direction.\\n\\nQ3(2): Also, I suggest addressing the segmentation task in great detail; additional qualitative examples for inpainting and denoising would improve the clarity and show how MVG compares visually to specialists.\\n\\nA3(2): Thanks for the suggestion. We have provided qualitative examples of denoising (LowDose), inpainting (BrainLocal), and cross-modal synthesis (BrainGLI) in Figure 5 of our original manuscript. 
We will provide more qualitative examples in the next version.\", \"q4\": \"Line 146 and subsection 4.2: \\u201c13 tasks\\u2026\\u201d. I suppose it is a miswriting, since there are 13 datasets, but only 4 tasks.\", \"a4\": \"Thank you for bringing this to our attention! Yes, this was a miswriting. There are 13 datasets and 4 tasks, not 13 tasks. We will revise the manuscript in the next version.\", \"q5\": \"Line 246 and Figure 3: The sub-numeration of blocks in the figure would make the reference to its blocks and understanding easier. Instead of referring to blocks as \\\"upper right\\\" or \\\"lower left\\\".\", \"line_248\": \"There is a reference to \\u201cFig 3(a)\\u201d, but, again, no numeration in the image\", \"a5\": \"We will update the references from \\\"(a)/(b)\\\" to \\u201cleft/right\\u201d\"}", "{\"comment\": \"Dear Reviewer vfk8,\\n\\nWe sincerely appreciate your review. We have carefully considered each of your questions and provided detailed responses in the rebuttal. Please let us know if you have any further questions or concerns.\\n\\nThanks!\"}", "{\"title\": \"Rebuttal by authors -- Part I\", \"comment\": \"Thanks for the appreciation of this work. We address your concerns below:\", \"q1\": \"The proposed framework is limited to 2D scenario.\", \"a1\": \"Thank you for pointing out this important concern. Our current framework operates on 2D images, similar to approaches like UniverSeg. While we acknowledge the critical importance of 3D contextual information for accurately analyzing anatomical structures and the limitations of 2D analysis in capturing volumetric relationships, transitioning directly to 3D models presents significant computational challenges that require careful, non-trivial design. How to properly incorporate 2.5D or fully 3D models into MVG will be thoroughly investigated as a future study. 
We will revise the manuscript to explicitly clarify this limitation and provide a roadmap for integrating 3D capabilities.\", \"q2\": \"clinical motivation. To me, since there are many pre-trained specialized models, medical researchers can just pick one of the state-of-the-art models and get better performance than the generalized models.\", \"a2\": \"For domains with well-established, large-scale pre-trained models, selecting one of the state-of-the-art specialized models for optimal performance is feasible. However, for domains lacking such pre-trained models, we argue that generalist models, trained on far more diverse datasets, hold a significant advantage in transfer learning.\\nGeneralist models offer unique benefits by consolidating diverse functionalities and insights within a unified framework. This integration enables them to address multiple tasks or domains within a single system, eliminating the need to develop and train numerous specialized models. This minimizes development time and provides a more cost-effective solution for researchers, particularly in resource-constrained settings.\\nFrom a performance perspective, while generalist models underperform compared to specialized models across benchmarks for both natural and medical images, their robust scalability is a promising indicator of future potential. As the availability of data and computational resources increases, the generalist approach may evolve to surpass specialized models in performance, offering a comprehensive solution that balances versatility and efficiency.\", \"q3\": \"The motivation of combining the masked image modeling and auto-regressive training is not clear. In the experiment, auto-regressive training is superior to masked image modeling. Then why combine them in the first place?\", \"a3\": \"Sorry for the confusion. 
While AR training outperforms MIM for image segmentation, this superiority does not extend to other tasks.\\nMIM\\u2019s suboptimal performance in segmentation, as shown in Table 7, arises from its masking strategy, which can disrupt the preservation of global contextual information crucial for delineating anatomical structures, such as the spatial relationships among abdominal organs. This aligns with findings from [1], which suggest that MIM is better suited for capturing local details but struggles with maintaining global context.\\nHowever, for tasks like inpainting and denoising, where refining local details takes precedence over preserving global context, MIM can be quite beneficial. To validate this, we compared MIM and AR training for cross-modal synthesis, inpainting, and denoising tasks using mean absolute distance as the evaluation metric, as shown in the Table below. MIM consistently outperformed AR training in these tasks while also requiring fewer computational resources.\\nBased on these insights and the results in Table 7, we adopted a hybrid training strategy:\\nFor segmentation tasks, AR training is used exclusively.\\nFor all other tasks, we allocate 90% of training iterations to MIM and 10% to AR training to leverage the strengths of both methods.\\n| Method | GPU Hours (RTX A5000) | Cross-modal Synthesis | Inpainting | Denoise |\\n|------------------|-----------------------|------------------------|------------|---------|\\n| Mask Image Modeling (MIM) | 972h | 0.019 | 0.006 | 0.018 |\\n| Autoregressive | 1062h | 0.020 | 0.006 | 0.019 |\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces Medical Vision Generalist (MVG), a foundational model designed to handle diverse medical imaging tasks within a unified framework. 
MVG standardizes input and output as images through an in-context generation approach, treating tasks as an image generation process conditioned on prompt image-label pairs. To leverage both local and global context, MVG combines masked image modeling with autoregressive training for conditional image generation. Experimental results indicate that MVG outperforms existing vision generalists, such as Painter and LVM, demonstrating scalability and adaptability across modalities and datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors insightfully recognize that a pure masking strategy is insufficient for medical image segmentation, leading them to incorporate an autoregressive training pipeline. Experimental results confirm the effectiveness of this approach.\\n\\n2. Unlike other foundational models focused primarily on segmentation, this paper addresses a broader range of tasks, including cross-modal synthesis, image denoising, and inpainting, opening potential new research directions.\", \"weaknesses\": \"1. While the addition of multiple tasks is beneficial, the paper overlooks essential medical imaging tasks, such as image registration and inverse reconstruction, making MVG appear more like an expanded segmentation model than a comprehensive foundation model. The reviewer suggests that MVG\\u2019s learned feature representations could potentially support image registration by integrating a flow estimation head, and inverse reconstruction by using denoising as a regularizer, unfolding the inverse optimization problem with forward consistency [1] within the network.\\n\\n2. Restricting training to 2D images raises concerns about MVG\\u2019s utility as a foundational model for medical imaging. Effective 3D analysis is crucial, as many anatomical structures (e.g., brain cortex, lung vessels, heart) span significant volumes, where 2D slices may miss critical contextual information.\\n\\n3. 
The authors claim that MVG \\u201cscales well with multiple tasks and datasets,\\u201d yet the evidence provided in Figure 6 and Table 6 only demonstrates that more data improves performance, a known property of deep learning models, and that unified training is preferable to isolated training, which reiterates established insights for vision transformers requiring large-scale data.\\n\\n4. The paper lacks runtime and complexity analysis, particularly GPU resources for training. Comparisons with resource-efficient models, such as nnU-Net trained on individual datasets, would offer a clearer picture. Specifically, what GPU/hours are required for training MVG on all datasets, and how does GPU memory usage compare to nnU-Net or other specialist models trained on individual datasets?\\n[1] MoDL: Model-Based Deep Learning Architecture for Inverse Problems, TMI 2019.\", \"questions\": \"1. Can the authors explain the discrepancy in ACDC Dice scores for UniverSeg between Table 2 in this paper (0.54) and Table 6 in the UniverSeg paper appendix (0.70)?\\n\\n2. Is the failure of mask modeling in segmentation tasks due to the use of L1 loss? Have alternative segmentation losses, such as Dice loss, been tested?\\n\\n3. Could the authors elaborate on the motivation and practical benefits of MVG in medical imaging? Clinically, how does a foundational model outperform specialist models in practice, such as in speed, ease of deployment, or cost-effectiveness? Technically, what key insights from MVG could guide future researchers in developing clinically and economically viable foundational models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"Thanks for the detailed response by the authors\\n\\nThe comments have clarified the concerns about the colorization strategy(Q1) and scalability (Q3). 
\\n\\nRegarding Q2, the authors claimed that the primary focus is on demonstrating how the proposed pipeline enables the integration of multiple medical vision tasks. However, current experiments could not provide enough evidence since the comparisons (Table 6) are performed between training with unified and isolated datasets. It is recommended that the authors perform an ablation study on vision tasks instead of datasets. Besides, the practical applicability of the proposed method is limited, given that its performance is lower than specialized models in almost all tasks.\\n\\nTherefore, the manuscript is below the acceptance threshold.\"}", "{\"comment\": \"Dear Reviewer UcLG,\\n\\nWe sincerely appreciate your review. We have carefully considered each of your questions and provided detailed responses in the rebuttal. Please let us know if you have any further questions or concerns.\\n\\nThanks!\"}", "{\"summary\": \"In this paper, the authors introduce a unified framework, named Medical Vision Generalist, for medical imaging analysis tasks, including segmentation, cross-modality synthesis, inpainting and denoising. The authors formulate the learning task as a prompt-based learning task, where prompts include task-image and task-label pairs. The authors adopt a single-channel colorization method to unify the output space of images across tasks. The authors combine the masked image modeling and auto-regressive training methods to train the model. 
To demonstrate the framework's capabilities, the authors curate a comprehensive generalist medical vision benchmark.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well written and easy to follow.\", \"The paper proposes a solution to address the generalization of medical imaging analysis, such as the cross-domain problem.\", \"The paper proposes a unified colorization formulation to unify the different output types of medical imaging analysis tasks.\", \"The paper treats the different learning tasks as a prompt-based learning task.\", \"The paper introduces a new benchmark for generalist medical imaging analysis.\"], \"weaknesses\": [\"The proposed framework is limited to the 2D scenario.\", \"The paper does not explain the clinical motivation why a generalized medical imaging analysis model is needed. To me, since there are many pre-trained specialized models, medical researchers can just pick one of the state-of-the-art models and get better performance than the generalized models.\", \"The motivation of combining the masked image modeling and auto-regressive training is not clear. In the experiment, auto-regressive training is superior to masked image modeling. Then why combine them in the first place?\", \"The generalization of the proposed framework is not very convincing, since the organs in the unseen dataset also appear in the training set. 
What about other types of medical imaging datasets, such as pathology data?\"], \"questions\": [\"Could the authors explain the potential clinical benefits of your proposed framework in detail?\", \"Could the authors add more unseen-dataset evaluation to test the generalization of the proposed framework, for example cell data whose semantics are not available in the training data?\", \"Could the authors explain the architecture of your ViT, such as the embedding size or number of layers or the total number of parameters, since the network requires 8 A5000 GPUs to train?\", \"Could the authors explain the training time of auto-regressive modeling?\", \"Could the authors explain the inference resources needed by your model? Can it run on CPU under an acceptable time? How much time does it require to infer a given test image under a given test hardware?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer crRU,\\n\\nWe sincerely appreciate your review. We have carefully considered each of your questions and provided detailed responses in the rebuttal. Please let us know if you have any further questions or concerns.\\n\\nThanks!\"}", "{\"summary\": \"In this paper, the authors introduce a unified framework, named Medical Vision Generalist, for medical imaging analysis tasks, including segmentation, cross-modality synthesis, inpainting and denoising. The authors formulate the learning task as prompt-based learning task and prompt include task-image and task-label pairs. The authors adopt a single-channel colorization method to unify the output space of images across tasks. 
The reviewer suggests that MVG\\u2019s learned feature representations could potentially support image registration by integrating a flow estimation head and inverse reconstruction by using denoising as a regularizer, unfolding the inverse optimization problem with forward consistency [1] within the network.\", \"a1\": \"We respectfully disagree with the characterization of MVG as an expanded segmentation model. While segmentation datasets in the benchmark are more numerous compared to other tasks, the total size of data used for low-level tasks (e.g., denoising and inpainting) is substantially larger. Specifically, the benchmark includes 2,475,636 training image/label pairs, distributed as follows:\\n**Segmentation - 212,811 pairs**,\\n**Synthesis - 1,987,111 pairs**,\\n**Inpainting - 139,311 pairs**,\\n**Detection - 66,532 pairs**,\\n**Denoising - 69,871 pairs**.\\n\\nAs mentioned in line 340 of the original paper, the sampling weight of segmentation tasks is 0.5 while the rest of the tasks share 0.5. Therefore, all tasks are well balanced and MVG is not dominated by a certain task.\\nMVG\\u2019s key contribution lies in unifying diverse medical imaging tasks within a single image-to-image generation framework, rather than employing task-specific fine-tuning or additional task-specific heads. However, our framework can be indeed extended to other tasks with modifications. For instance, adding a flow estimation head enables image registration. Following [2], we trained and evaluated MVG on the Neurite-OASIS dataset [3], consisting of 350 training slices and 64 testing slices, achieving a Dice score of 0.602: \\n|Method | Registration | \\n|--------------|--------------|\\n| MVG | 0.602 | \\n\\nFor inverse reconstruction, the task requires 12-frame inputs, which would significantly increase computational demands and training costs\\u2014an addition that is non-trivial given the current computation constraints. 
This challenge is compounded by the already high training budget (see Q4 below), driven by the large input size (4\\u00d7 the original image size, see Figure 3) and the use of the ViT-Large architecture.\\nWe acknowledge the importance of these tasks and will discuss their applicability in the manuscript, referencing [1]. In future work, we will explore efficient approaches to extend MVG for multi-frame input tasks and other applications.\", \"q2\": \"Restricting training to 2D images raises concerns about MVG\\u2019s utility as a foundational model for medical imaging. Effective 3D analysis is crucial, as many anatomical structures (e.g., brain cortex, lung vessels, heart) span significant volumes, where 2D slices may miss critical contextual information.\", \"a2\": \"Our current framework operates on 2D images, similar to approaches like UniverSeg. While we acknowledge the critical importance of 3D contextual information for accurately analyzing anatomical structures and the limitations of 2D analysis in capturing volumetric relationships, transitioning directly to 3D models presents significant computational challenges that require careful, non-trivial design. How to properly incorporate 2.5D or fully 3D models into MVG will be thoroughly investigated as a future study. We will revise the manuscript to explicitly acknowledge this limitation and outline a roadmap for integrating 3D capabilities into MVG.\"}", "{\"comment\": \"Dear Reviewer vfk8,\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. After this deadline, we may not have the opportunity to respond.\\n\\nWe sincerely appreciate your time and thoughtful input and look forward to hearing your feedback.\\n\\nSincerely,\\n\\nAuthors of Submission 8515\"}", "{\"comment\": \"Dear Reviewer UcLG,\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. 
After this deadline, we may not have the opportunity to respond.\\n\\nWe sincerely appreciate your time and thoughtful input and look forward to hearing your feedback.\\n\\nSincerely,\\n\\nAuthors of Submission 8515\"}", "{\"title\": \"Rebuttal by authors -- Part \\u2161\", \"comment\": \"Q4: What about other types of medical imaging datasets, such as pathology data?\", \"a4\": \"Pathology images, which are typically 3-channel RGB images, present significant differences from the radiology images (e.g., CT and MRI) used in our current training data due to domain and modality gaps. These gaps arise from the fact that pathology images are collected and scanned in entirely different environments. As a result, our MVG cannot be directly applied to pathology datasets in its current form without pretraining on such data.\\nWe plan to enhance the diversity of data types in the pretraining stage by incorporating pathology images and other imaging modalities in future work. This can further enhance the MVG\\u2019s generalization capabilities across a broader range of medical imaging datasets, improving its robustness and applicability.\", \"q5\": \"Could the authors explain the architecture of your ViT, since the network requires 8 A5000 gpus to train?\", \"a5\": \"We use ViT large with depth=24 and width=1024. To finish training on A5000, we use gradient checkpoint and accumulation to reduce memory consumption.\", \"q6\": \"training time.\", \"a6\": \"The total training time is 1,062 GPU (A5000) hours\", \"q7\": \"inference resources\", \"a7\": \"Our model is optimized for GPU deployment, taking full advantage of the parallel processing capabilities of modern GPUs to achieve high inference efficiency. On an RTX A5000 GPU, our model processes a single test image in approximately 0.3 seconds. 
While the model can run on a CPU, the inference speed is significantly slower (~3.7 seconds), making GPU utilization essential for maintaining acceptable processing times in demanding environments. To further improve performance for CPU deployment, we plan to explore advanced model compression techniques [2,3] and acceleration methods [2].\\n\\n\\n[1] Xie, Z., Geng, Z., Hu, J., Zhang, Z., Hu, H., & Cao, Y. (2023). Revealing the dark secrets of masked image modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14475-14485).\\n\\n[2] Choudhary, T., Mishra, V., Goswami, A., & Sarangapani, J. (2020). A comprehensive survey on model compression and acceleration. Artificial Intelligence Review, 53, 5113-5155.\\n\\n[3] Li, Z., Li, H., & Meng, L. (2023). Model compression for deep neural networks: A survey. Computers, 12(3), 60.\"}", "{\"title\": \"Rebuttal by authors -- Part I\", \"comment\": \"Sorry for posting our rebuttal late. We address your concerns below:\", \"q1\": \"As shown by the authors, the hybrid use of autoregressive training boosts performance. However, it may impose higher computational costs during inference. The authors could provide a more detailed analysis of the trade-offs between performance and inference efficiency, especially when using MVG with heavy medical files, like high-resolution MRI.\\n\\n\\n\\n\\n| Method | GPU hours (RTX A5000) | mIoU |\\n|--------------------|-----------------------|------|\\n| Mask Image Modeling| 972h | 0.53 |\\n| Autoregressive | 1062h | 0.79 |\", \"a1\": \"We appreciate the question and apologize for any confusion caused. It is important to clarify that autoregressive training impacts only the training budget and does not introduce any additional computational costs during the inference stage. 
The observed higher computational costs arise primarily from two factors: the significantly larger input dimensions (our framework uses inputs that are four times larger, including the prompt image/label and task image) during training and the use of a larger network (ViT-Large) compared to standard segmentation methods, such as UNet.\\nTo provide further insight into how autoregressive training and mask image modeling affect the training budget, we conducted an ablation study (refer to the table above). This demonstrates that mask image modeling contributes only a marginal reduction in computational cost, as masking is applied solely to the label images. Autoregressive training, in contrast, slightly increases the training cost while significantly boosting performance (mIoU). It is worth emphasizing that neither method increases the computational burden during inference.\\n\\nQ2(1): The ablation study lacks detailed insights into the contribution of individual components. For example, how do masked image modeling and autoregressive training individually affect the performance? \\n\\nA2(1): \\nThank you for raising this point. We conducted an ablation study to evaluate the individual contribution of masked image modeling (MIM) and autoregressive (AR) training. As detailed in Table 7 of the original manuscript, AR training significantly outperforms MIM (using the optimal mask ratio of 75%) for segmentation tasks across all nine datasets, achieving an average improvement of 0.27 IoU across various organs and targets.\\nAs shown in Lines 254-263 in the original manuscript, we hypothesize that MIM\\u2019s suboptimal performance in segmentation tasks stems from its masking strategy, which may compromise the preservation of global contextual information critical for delineating anatomical structures, such as the spatial relationships among abdominal organs. 
This observation aligns with findings from [1], which indicate MIM excels at capturing local details but struggles with global context preservation. We also visualized attention maps and calculated attention distances. Our results, to be included in the revised manuscript, show that MIM-trained models exhibit smaller attention distances, indicating a focus on local regions, whereas autoregressive training captures more global context by producing larger attention distances in key attention heads.\\nConversely, MIM is beneficial for tasks like inpainting and denoising, where refining local details may take precedence over maintaining global context. To further validate this, we compared MIM and AR training for cross-modal synthesis, inpainting, and denoising tasks using mean absolute distance as the evaluation metric, as shown in the Table below. MIM consistently outperformed AR training in these tasks while also requiring less computational cost. Based on these insights and the results in Table 7, we adopted a hybrid training strategy:\\nFor segmentation tasks, we use AR training exclusively.\\nFor all other tasks, we allocate 90% of training iterations to MIM and 10% to AR training.\\n| Method | GPU hours (RTX A5000) | Cross-modal synthesis | Inpainting | Denoise |\\n|------------------|-----------------------|------------------------|------------|---------|\\n| MIM|972h|0.019|0.006|0.018| \\n| Autoregressive | 1062h | 0.020 | 0.006 | 0.019 |\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"Thanks for your comments. We address your concerns below:\", \"q1\": \"compare the segmentation performance while isolating the effects of the colorization strategy from task unification.\", \"a1\": \"We clarify that colorization is key to task unification in our method, as it inherently unifies all tasks. 
In Table 5, we conduct an ablation study to isolate the effects of different colorization strategies while maintaining task unification, allowing us to evaluate colorization's impact independently. Specifically, MVG attains average IoU scores of 0.55, 0.71, and 0.79 across nine segmentation datasets using binary, pre-defined, and random colorization, respectively.\", \"q2\": \"Regarding cross-modal synthesis\\u2026 the improvement by the MVG model is marginal compared to the previous generalist model, and all generalist models perform worse than specialist models in each task, according to Table 3.\", \"a2\": \"The improvement achieved by MVG is not marginal. As shown in Table 3, our method consistently outperforms in all three low-level tasks (synthesis, inpainting, and denoising) across all three evaluation metrics (MAE, SSIM, and PSNR). In addition to the results presented in Figure 5, we will include further qualitative comparisons with existing generalist models to better highlight MVG's effectiveness.\\nOur primary focus is on demonstrating how our pipeline enables the integration of multiple medical vision tasks into a unified image-to-image generation framework, representing a novel vision-centric approach to medical generalist models. While generalist models currently underperform specialized models, they offer greater flexibility. Unlike specialized models like U-Net or nnUNet, which are limited to segmentation tasks, MVG can be trained across a wide range of tasks and easily adapted to new datasets with minimal task-specific data. \\nMoreover, MVG\\u2019s strong scalability\\u2014both in terms of data and task diversity\\u2014suggests that, with increased data and computational resources, our generalist approach has the potential to surpass specialized models. 
Unlike specialized models, which are limited to a single task and cannot benefit from task-level scalability, MVG's ability to handle a wide range of tasks positions it to scale more effectively as new tasks and data emerge. This inherent flexibility gives our generalist model advantages in scaling to meet the evolving demands of medical imaging.\", \"q3\": \"the experiments on the scalability were conducted on several small datasets, which could not demonstrate the potential to increase the performance when unifying all datasets.\", \"a3\": \"We first assemble all 12 training datasets into a large training dataset including Segmentation: 212,811 pairs, Synthesis: 1,987,111 pairs, Inpainting: 139,311 pairs, Detection: 66,532 pairs. All scalability experiments were conducted on this comprehensive dataset, which includes over 2 million pairs. Metrics for each sub-task were reported based on this unified dataset, demonstrating the framework's ability to scale effectively across diverse tasks. Additionally, we plan to expand these experiments to incorporate even more datasets in future work to further validate the scalability of our approach.\"}", "{\"comment\": \"Dear Reviewer Q4Bs,\\n\\nWe sincerely appreciate your review. We have carefully considered each of your questions and provided detailed responses in the rebuttal. Please let us know if you have any further questions or concerns.\\n\\nThanks!\"}", "{\"summary\": \"The paper introduces a medical vision generalist (MVG) model, which unifies various medical imaging tasks including segmentation, cross-modal synthesis, denoising, and inpainting within a single image-to-image generation framework. MVG utilizes in-context learning, treating tasks as image generation processes conditioned on prompt image-label pairs, allowing for flexible adaptation to different modalities and datasets. 
The authors have also curated a comprehensive benchmark with 13 datasets across four imaging modalities to evaluate MVG, which consistently outperforms existing generalist models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is the first to use in-context learning to unify multiple medical vision tasks, which is original.\nThe proposed output space unification strategy is useful when training with multiple segmentation tasks.\", \"weaknesses\": \"The advantage of unifying multiple medical vision tasks through the MVG model could not be verified based on the evidence provided in this paper.\nAccording to Table 2 and Table 5, the performance gain in segmentation tasks could be due to the colorization strategy instead of unifying other vision tasks.\nRegarding cross-modal synthesis, inpainting, and denoising tasks, the improvement by the MVG model is marginal compared to the previous generalist model, and all generalist models perform worse than specialist models in each task, according to Table 3.\nBesides, the experiments on scalability were conducted on several small datasets, which could not demonstrate the potential to increase the performance when unifying all datasets.\", \"questions\": \"To better demonstrate the advantage of unifying multiple medical vision tasks through the MVG model, the authors could make the following efforts to address the concerns:\n1. compare the segmentation performance while isolating the effects of the colorization strategy from task unification.\n2. discuss potential reasons for the marginal improvements in these tasks and propose strategies to close the gap with specialist models.\n3. conduct scalability experiments on the unified datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by authors -- Part \\u2162\", \"comment\": \"Q7: Clinical practice. 
Could the authors elaborate on the motivation and practical benefits of MVG in medical imaging?\", \"a7\": \"Generalist models typically require more computational resources because they need to handle a wide range of tasks simultaneously. Nevertheless, generalist models still have non-negligible advantages over specialized models. They provide versatility by handling multiple tasks or domains within a single framework, **reducing the need for separate models and minimizing development time**. Their ability to leverage **shared knowledge across tasks improves data efficiency and facilitates scalability as new tasks emerge**. Additionally, generalist models often excel in transfer learning, offering cost-effective solutions by integrating various functionalities and insights into one unified system.\\n\\nIn this research work, our primary focus is on demonstrating how our pipeline enables the integration of multiple medical vision tasks into a unified image-to-image generation framework, representing a novel vision-centric approach to medical generalist models. While generalist models currently underperform specialized models across various benchmarks\\u2014spanning both natural and medical imaging domains\\u2014they offer transformative potential by **unifying multiple modalities, tasks, and datasets into a single framework**. This new paradigm provides **far greater flexibility** compared to task-specific models\\u2014unlike specialized models which are limited to segmentation tasks, MVG can be directly trained across a wide range of tasks and easily adapted to new datasets with minimal task-specific data. \\n\\nMoreover, MVG\\u2019s strong **scalability\\u2014both in terms of data and task diversity**\\u2014suggests that, with increased data and computational resources, our generalist approach has the potential to surpass specialized models. 
Unlike specialized models, which are limited to a single task and cannot benefit from task-level scalability, MVG's ability to handle a wide range of tasks positions it to scale more effectively as new tasks and data emerge. In addition to scalability, this flexibility also promotes task generalization and efficient knowledge transfer, making MVG a forward-compatible solution to meet the evolving demands of medical imaging. With increased data and computational resources, our generalist approach may have the potential to surpass specialized models in the future to meet the evolving demands of medical imaging.\\n\\n[1] Aggarwal, H. K., Mani, M. P., & Jacob, M. (2018). MoDL: Model-based deep learning architecture for inverse problems. IEEE transactions on medical imaging, 38(2), 394-405.\\n\\n[2] Balakrishnan, G., Zhao, A., Sabuncu, M. R., Guttag, J., & Dalca, A. V. (2019). Voxelmorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging, 38(8), 1788-1800.\\n\\n[3] Hoopes, A., Hoffmann, M., Greve, D. N., Fischl, B., Guttag, J., & Dalca, A. V. (2022). Learning the effect of registration hyperparameters with hypermorph. The journal of machine learning for biomedical imaging, 1.\\n\\n[4] Xie, Z., Geng, Z., Hu, J., Zhang, Z., Hu, H., & Cao, Y. (2023). Revealing the dark secrets of masked image modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14475-14485).\\n\\n[5] Dosovitskiy, A. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.\\n\\n[6] Radford, A. (2018). Improving language understanding by generative pre-training.\\n\\n[7] Kenton, J. D. M. W. C., & Toutanova, L. K. (2019, June). Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of naacL-HLT (Vol. 1, p. 2).\\n\\n[8] He, K., Chen, X., Xie, S., Li, Y., Doll\\u00e1r, P., & Girshick, R. (2022). 
Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16000-16009).\"}", "{\"summary\": \"The work is about a new foundation model for medical image analysis that aims to unify multiple imaging tasks (segmentation, cross-modal synthesis, inpainting, and denoising), called Medical Vision Generalist (MVG). The implementation is within a single model using a standardized image-to-image generation framework. MVG distinguishes itself by employing in-context learning strategies, which eliminate the need for retraining on new datasets and enable quick adaptation to unseen tasks with minimal labeled samples. Also authors introduce a benchmark that covers 13 datasets across 4 medical tasks for various imaging modalities (CT, MRI, X-ray, and micro-ultrasound). The latter makes this study not only about the methodology but also an important contribution to the research community.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The comprehensiveness of imaging modalities and tasks. The paper addresses segmentation, cross-modal synthesis, inpainting, and denoising tasks across various modalities, like CT, MRI, X-ray, and micro-ultrasound.\\n\\nHigh performance and generalization ability is remarkable. MVG outperforms SOTA models such as Painter and LVM in most metrics. It also demonstrates scalability with different datasets and adaptability to unseen datasets with minimal samples.\\n\\nMethodology of training - hybrid approach with masked image modeling and autoregressive training is a novelty that provides strong performance across diverse tasks. Also this approach partly solves the problem with medical imaging data augmentation (like cropping operations).\", \"weaknesses\": \"As shown by authors, the hybrid use of autoregressive training boosts performance. However, it may impose higher computational costs during inference. 
The authors could provide a more detailed analysis of the trade-offs between performance and inference efficiency, especially when using MVG with heavy medical files, like high-resolution MRI.\n\nThe ablation study lacks detailed insights into the contribution of individual components. For example, how do masked image modeling and autoregressive training individually affect the performance? For example, as the authors mentioned, usual augmentation techniques for SSL, like image cropping, cannot be applied to medical imaging. The cropping procedure might mislead the model training and cause the performance to plummet. I wonder if the authors addressed that and conducted some experiments with (medical-domain-specific) data augmentation?\n\nThe paper compares the authors\u2019 model with generalist models; however, the comparative analysis with specialist models such as U-Net and nnUNet could be more detailed. A deeper examination of the performance trade-offs between generalist and specialist models would help to position MVG\u2019s contribution more clearly and to understand the limits of its applicability. Also, I suggest addressing not only the segmentation task in great detail; additional qualitative examples for inpainting and denoising would improve the clarity and show how MVG compares visually to specialists.\", \"questions\": \"Line 146 and subsection 4.2: \u201c13 tasks\u2026\u201d.\nI suppose this is a typo, since there are 13 datasets, but only 4 tasks.\", \"line_246_and_figure_3\": \"Sub-numbering the blocks in the figure would make referring to them and understanding the text easier, instead of referring to blocks as \\\"upper right\\\" or \\\"lower left\\\".\", \"line_248\": \"There is a reference to \u201cFig 3(a)\u201d, but, again, no numbering in the image.\n\nTable 1. 
I think it is important to note at the beginning of the article that, in the benchmark, the proportion of datasets across the 4 tasks is unequal, with the segmentation task comprising the majority.\", \"line_255\": \"\u201cWe hypothesize that this may be attributed to the masking strategy...\\\" - Indeed, medical image data have fundamental and well-known differences from natural-domain data; here, you could cite some papers that discuss this topic in more detail. For example (Huang et al., Self-supervised learning for medical image classification: a systematic review and implementation guidelines)\n\nLine 465 and Table 4. The difference between the Liver and Spleen datasets and the Lung dataset is almost twofold. Can you elaborate on the causes of such a huge difference?\", \"line_470\": \"Data scalability section - there is no reference to Fig. 6.\n\nThe reliance on prompts to condition predictions introduces variability in outputs and performance, because different prompts may yield different results. Did the authors address it somehow in their experiments? At the very least, how is the performance affected by different prompts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer crRU,\n\nWe would like to kindly remind you that the discussion deadline is approaching. 
After this deadline, we may not have the opportunity to respond.\\n\\nWe sincerely appreciate your time and thoughtful input and look forward to hearing your feedback.\\n\\nSincerely,\\n\\nAuthors of Submission 8515\"}", "{\"title\": \"Rebuttal by authors -- Part \\u2161\", \"comment\": \"Q3: The authors claim that MVG \\u201cscales well with multiple tasks and datasets,\\u201d yet the evidence provided in Figure 6 and Table 6 only demonstrates that more data improves performance, a known property of deep learning models and that unified training is preferable to isolated training, which reiterates established insights for vision transformers requiring large-scale data.\", \"a3\": \"Thank you for raising this important point. However, for deep networks, data scaling does not always work well. For example, CNNs face more significant scaling challenges compared to Transformers [5]. Moreover, even within Transformers, those trained using supervised learning exhibit poorer scaling capabilities than their self-supervised counterparts [6,7,8]. These findings emphasize the need to design effective algorithms with robust scaling properties and to rigorously evaluate their scaling performance, as demonstrated in our work.\\n\\nTo our knowledge, this is the first study to propose and implement a unified learning framework for training a generalist model across diverse medical vision tasks, spanning both high-level objectives (e.g., segmentation) and low-level ones (e.g., denoising, inpainting, and synthesis). Unlike prior work, which typically focuses on scalability within isolated tasks or domains (e.g., MedSAM and UniverSeg for segmentation), **our study demonstrates for the first time that heterogeneous tasks can be coherently trained together in a unified and scalable framework**.\\nMVG validates the framework\\u2019s adaptability to emerging datasets and tasks, providing a versatile solution for the evolving needs of medical imaging. 
1) While the feasibility of a generalist medical AI that unifies multiple imaging tasks has remained unclear, our study offers the first proof-of-concept for a vision-centric generalist model capable of handling diverse tasks within a unified image-to-image generation framework. 2) We also show that MVG exhibits strong scalability, improving performance with more diverse training tasks and effectively adapting to new datasets with minimal task-specific samples. This highlights the potential of generalist models to address future medical imaging challenges.\\n3) To advance understanding, we provide an in-depth analysis of the roles of masked image modeling and autoregressive training in developing a generalist medical model, an aspect not explored in prior studies. Our analysis also includes ablation studies on objective functions, data/task balancing, and generalization to new discriminative tasks, such as detection.\\nThese contributions go far beyond reiterating known properties, offering novel insights into the feasibility and potential of unified training across diverse medical domains.\", \"q4\": \"runtime and complexity analysis.\", \"a4\": \"Thanks for the suggestion. For a fair comparison, we report the training cost of TransUnet and our MVG evaluated on 1 A5000 GPU.\\n| Method | Param | Training Cost |\\n|-----------|-------|---------------|\\n| MVG | 370M | 1062h |\\n| TransUnet | 92M | 250h |\\n\\nNote that a generalist usually costs more training time than a specialist. 
MVG requires more training resources due to 1) the significantly larger input dimensions (our framework uses inputs that are four times larger, including the prompt image/label and task image) during training and 2) the use of a larger network (ViT-Large) compared to standard segmentation methods, such as UNet.\", \"q5\": \"the discrepancy in ACDC Dice scores for UniverSeg between Table 2 in this paper (0.54) and Table 6 in the UniverSeg paper appendix (0.70)?\", \"a5\": \"The discrepancy arises from differences in the testing procedure. In the UniverSeg paper, only the max (slice with the maximum area of the target mask) and mid slices were tested. In our work, we randomly sampled from all slices to create a more comprehensive testing set, resulting in a different testing split.\", \"q6\": \"Is the failure of mask modeling in segmentation tasks due to the use of L1 loss? Have alternative segmentation losses, such as Dice loss, been tested?\", \"a6\": \"Thank you for the insightful question. As noted in Lines 254\\u2013263 of the original manuscript, the limitations of masked image modeling (MIM) in segmentation tasks stem from its focus on recovering local details, which compromises global contextual information crucial for segmentation. This aligns with findings from [4], which show that MIM is better suited for capturing local rather than global information. Since segmentation relies heavily on global context to delineate anatomical structures, this issue should persist regardless of the choice of loss function. To further validate this point, we conducted experiments on the AMOS CT dataset and compared Dice loss and L1 loss. The results indicate similar performance (Dice: 0.48 vs. L1: 0.46).\\n\\n| L1 Loss | 0.46 |\\n|------------|-------|\\n| Dice Loss | 0.48 |\"}", "{\"comment\": \"There appears to be no rebuttal?\"}" ] }
Et0SIGDpP5
Long-context Protein Language Model
[ "Yingheng Wang", "Zichen Wang", "Gil Sadeh", "Luca Zancato", "Alessandro Achille", "George Karypis", "Huzefa Rangwala" ]
Self-supervised training of language models (LMs) has seen great success for protein sequences in learning meaningful representations and for generative drug design. Most protein LMs are based on the Transformer architecture trained on individual proteins with short context lengths. Such protein LMs cannot extrapolate well to longer proteins and protein complexes. They also fail to account for the underlying biological mechanisms carried out by biomolecular interactions and dynamics, i.e., proteins often interact with other proteins, molecules, and pathways in complex biological systems. In this work, we propose LC-PLM, based on an alternative protein LM architecture, BiMamba-S, built off selective structured state-space models, to learn high-quality universal protein representations at the amino acid token level using masked language modeling. We also introduce its graph-contextual variant, LC-PLM-G, which contextualizes protein-protein interaction (PPI) graphs for a second stage of training. LC-PLM demonstrates favorable neural scaling laws, better length extrapolation capability, and a 7\% to 34\% improvement on protein downstream tasks over Transformer-based ESM-2. LC-PLM-G, further trained within the context of PPI graphs, shows promising results on protein structure and function prediction tasks. Our study demonstrates the benefit of increasing the context size with a computationally efficient LM architecture (e.g. structured SSMs) in learning universal protein representations and incorporating molecular interaction contexts contained in biological graphs.
[ "protein language model" ]
Reject
https://openreview.net/pdf?id=Et0SIGDpP5
https://openreview.net/forum?id=Et0SIGDpP5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0XBd6k0bT", "yL909kNG26", "xEYIvueAqS", "x9xfWmpwOU", "sMTKYS7SUe", "sE08Fm3pCh", "sDOvST5BWN", "q7eSAEetLN", "mbiH9ZAhx5", "jWJUhf1qXM", "hZpuLkLEJ8", "hOPhOmHCMk", "gw5QFWZdLp", "cwBETHpEKU", "brijYTkf9L", "bfUuqfarz3", "b7y2HTiUqR", "Z5hGpK0ZpP", "TEwb1cqiCW", "RpFmiJEIsv", "MGSH559L6P", "Kpy19HK5ne", "JC3OWeSLe7", "Id4XyXAstr", "IDCQSkmHAJ", "GapBsM8q7k", "FOcfyJ6QMG", "EdNV2oDIW5", "Bj2Av5cR6R", "BP2p8Ks9pB", "AmqPi8r8vp", "A6tx3lWPZ1", "9ZP4M9VrFd", "9NA0MP67CO", "8bz3YUpoYY", "79htpq8ayX", "6RGrOv46yV", "3zu2yV6v6W", "2SuOGczwaL", "2QDxcOwSp4", "1M3NqY7TzU" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732300321369, 1737523935672, 1732903328256, 1733127375415, 1732301223634, 1732831304315, 1732304147947, 1732301205047, 1732298128374, 1730682173753, 1732894875308, 1732826156201, 1732831176918, 1732551399999, 1732298328297, 1732299588639, 1732564674583, 1732553353125, 1732301553220, 1732300848488, 1732551455661, 1732916548741, 1732302029680, 1732298001809, 1732832599525, 1732551247794, 1730615934884, 1732302624962, 1730450871011, 1734837989203, 1732303985359, 1732298583092, 1732301872603, 1732551291781, 1733168277386, 1732831374546, 1732303389001, 1730673966051, 
1732302610584, 1732299144058, 1732838686259 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_Z2Tu" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_Z2Tu" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_Z2Tu" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_sY9s" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_jUT5" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_8QMc" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_jUT5" ], [ "ICLR.cc/2025/Conference/Submission8832/Area_Chair_tRTn" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_sY9s" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Authors" ], [ "ICLR.cc/2025/Conference/Submission8832/Reviewer_8QMc" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 8QMc [3/n]\", \"comment\": \">**5. How did we hold out the test 250K? Do we retrain ESM-2 for Figure 4? What is \\u201cevaluation loss\\u201d?**\\n\\n- Similar to ESMFold [[Lin et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.science.org/doi/10.1126/science.ade2574&ved=2ahUKEwj8yOyRx_CJAxVtD1kFHUdfEgEQFnoECB8QAQ&usg=AOvVaw2uZOq8F6b3Ys4mkbF9t3hQ)], the hold-out 250K sequences are randomly sampled from UniRef90. For the training set, we used `mmseqs` to filter out sequences in the UniRef50 data with >90% sequence identity, such that the remaining sequences in UniRef50 are no more than 90% similar to any of the sequences in the hold-out test set. We provide the details in Appendix B.\\n- Yes, we retrained ESM-2 following the [official recipe](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://github.com/facebookresearch/esm&ved=2ahUKEwie9OWcx_CJAxWgF1kFHUKaOAwQFnoECAwQAQ&usg=AOvVaw1qbC62hfVOkizvBbKD47EH) for different sizes using the same train set and test set as we used for LC-PLM. \\n- Evaluation loss is the **average cross-entropy across all tokens** (equivalent to [Perplexity (PPL)](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://en.wikipedia.org/wiki/Perplexity&ved=2ahUKEwiVpZ-_x_CJAxWQGFkFHdLyIUYQFnoECBIQAQ&usg=AOvVaw0U47jF3dhfPmb6wd8SKD30)), which is the standard way that people use to evaluate language models . We added this description to our manuscript.\\n\\n>**6. 
Align folding trunk in structure prediction.**\n\nIn the folding evaluation presented in RQ4, Table 2, we are essentially evaluating how well the residue-level embeddings from different pLMs can predict the 3D structures. We perform this evaluation by learning a folding trunk, with its architecture adopted from ESMFold, for each pLM on paired protein embeddings and ground truth structures, then validate the performance on hold-out structure datasets (CASP14, CASP15-multimer, Benchmark2). It is worth noting that the folding trunks for each pLM are trained from scratch rather than via transfer learning. This evaluation setting is similar to linear probing, where the parameters in the LMs are frozen and the probing head (in this case, a single folding trunk) is learned by stochastic gradient descent. \n\n>**7. Robustness of downsampling.**\n\nThanks for this great feedback! We add new results to assess the robustness of downsampling. We perform 1.5% downsampling three times using three random seeds and retrain both our LC-PLM and ESM-2. The results are in the table below. We find that the performance is very robust to the downsampling, with a small standard deviation that is close to (even smaller than) the standard deviation of using the same train set but training with different random seeds, as we reported in our original table.\n\n| Models | CASP15-multimers | CASP14 | Benchmark2 |\n|-------------------------|---------------------|--------------------|--------------------|\n| ESM-2-650M (100B) | 0.4132 \u00b1 0.0065 | 0.3437 \u00b1 0.0039 | 0.4773 \u00b1 0.0092 |\n| LC-PLM-790M (100B) | 0.5004 \u00b1 0.0139 | 0.4244 \u00b1 0.0053 | 0.6290 \u00b1 0.0121 |\n| ESM-2-public (1T, for reference only) | 0.5128 \u00b1 0.0003| 0.4421 \u00b1 0.0023| 0.6844 \u00b1 0.0059|\n\n>**8. Usefulness of graph context.**\n\nThanks for bringing up this concern! 
We want to refer to Tables 14, 15, 16, 17 to show that the encoded graph context information can help with protein function prediction and protein interaction prediction. But to make this claim much clearer, we also add more experiments (as shown in the table below) on downstream tasks to verify the effectiveness of the graph contextual training. LC-PLM-G outperforms its vanilla variant on 3/4 TAPE tasks, as shown in the table below. We also want to note that by comparing two LC-PLM-G variants trained on different PPI graphs, the performance also varied a lot, which indicates that the data quality of the PPI graph is also important for the performance boost. We think building up a high-quality PPI graph can be meaningful future work that makes the pretrained pLM better. Regarding the fact that incorporating PPI graphs drastically hurts the performance of ESM-2, this is potentially due to **the poor length extrapolation capability (as shown in Figure 6) and the catastrophic forgetting issue** [[Kenneweg et al.](https://arxiv.org/abs/2404.01317), [Luo et al.](https://arxiv.org/abs/2308.08747)] of Transformers. In Figure 6, we show that if we train ESM-2 on longer sequences, the model will fail to extrapolate on both shorter and longer sequences. Thus, after the second-phase graph context training, ESM-2 forgets the high-quality representations for regular-length protein sequences (shorter than graph-contextualized sequences) learned in the first-phase pretraining and fails to extrapolate on these shorter sequences. It can only provide degenerated representations of them.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer Z2Tu\", \"comment\": \"We sincerely appreciate your advocacy on our behalf. Your support has been invaluable and deeply meaningful to us.\\n\\nIn our revised manuscript, we have incorporated the points we committed to addressing, making updates in both the main text and the appendix. 
Additionally, we have highlighted the new additions and adjustments, including the definitions and rationales of specific criterions used in Table 1 (see Appendix A from [here](https://drive.google.com/file/d/1LT_6biW4N2I2vbn6LO8keA1wQWHzLqXe/view?usp=sharing) given currently there is a technical issue on OpenReview for updating the PDF file). Please let us know if there is anything that you feel requires further clarification or adjustment.\\n\\nOnce again, we thank you for your insightful feedback. We kindly ask if you could consider increasing your confidence if you feel we have adequately addressed and clarified the concerns.\"}", "{\"comment\": \"I consider confidence as a pure meta-value, how much trust there is in my own review and I don't want to overstate my understanding and knowledge.\"}", "{\"title\": \"Response to Reviewer jUT5 [2/n]\", \"comment\": \"Table A. Protein structure prediction with LMFold. Structure prediction performance (TM score) are reported on different hold out datasets.\\n| Models | CASP15-multimers | CASP14 | Benchmark2 |\\n|-----------------------|-----------------------------------------------|--------------------------------------|------------------------------------|\\n| LC-PLM-790M (100B) | **0.5109 \\u00b1 0.0070** | **0.4154 \\u00b1 0.0080** | **0.6290 \\u00b1 0.0071** |\\n| ProtMamba-public | N/A (ProtMamba cannot run on protein sequences with > 2048 length) | 0.3288 \\u00b1 0.0091 | 0.4515 \\u00b1 0.0062 |\\n\\nTable B. Evaluation on TAPE tasks in supervised fine-tuning setting. We report the top-1 accuracy for the Remote Homology fold-level test set; accuracy for the 3-class secondary structure prediction on the CB513 test set; Spearman\\u2019s correlation coefficients for the test sets for the Stability and Fluorescence prediction tasks. 
For the Jacobian contact map prediction task, we adopted the methods from [[Zhang et al.](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of ProteinNet dataset [[AlQuraishi](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)].\\n| Models | PPI Graph | Jacobian Contact Map | Remote Homology | Secondary Structure | Stability | Fluorescence |\\n|-----------------------|-----------------|-----------------------|---------------------|---------------------|-----------------|-----------------|\\n| LC-PLM-790M (100B) | None | 47.1 | 35.14 \\u00b1 1.69 | **85.07 \\u00b1 0.03** | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **35.74 \\u00b1 0.93** | 85.02 \\u00b1 0.11 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.033** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | 35.60 \\u00b1 1.45 | 85.01 \\u00b1 0.03 | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n| ProtMamba-public | None | 10.96 | 17.82 \\u00b1 1.85 | 68.43 \\u00b1 0.06 | 0.726 \\u00b1 0.012 | 0.688 \\u00b1 0.005 |\"}", "{\"title\": \"Discussion [2/n]\", \"comment\": \"> 4. Weak motivation of long-context modeling.\\n\\nWe thank the reviewer for this comment. We clarify the biological motivations and needs for long-context modeling of proteins into three perspectives: (1) functional, (2) structural, and (3) evolutionary. (1) many proteins function as part of multi-protein complexes (e.g. transcription factors) and physically or functionally interact with other proteins and molecules. The interaction information is often captured in protein-protein interaction graphs. Knowing the interacting partners of an individual protein is helpful in predicting the protein\\u2019s properties. 
We demonstrate this on protein function prediction tasks using the ogbn-proteins graph, as shown in Tables 14 and 16. With interaction information, the model shows a performance gain. (2) Protein structure depends on the global fold, which can involve residues and interactions across long distances and across multiple protein sequences. Modeling multi-protein systems captures distant dependencies critical for stability and function. Folding of multimeric protein complexes relies on models capable of handling long contexts. We demonstrate this benefit in our LMFold experiments in Table 2. Our model outperforms ESM-2 across all folding benchmarks, especially on CASP15-multimers. (3) Proteins in the same pathway or family exhibit co-evolutionary patterns due to functional interdependencies. In fact, multiple sequence alignment (MSA) of homologous protein sequences is a common approach to increasing the context for studying individual proteins. As other reviewers noted, ProtMamba [[Sgarbossa et al.](https://www.biorxiv.org/content/10.1101/2024.05.24.595730v1)] is inspired by leveraging MSA as an individual protein\u2019s context.\\n\\nWe also want to refer to Tables 14, 15, 16, 17 to show that the encoded graph context information can help with protein function prediction and protein interaction prediction. To make this claim much clearer, we also add more experiments (as shown in the table below) on downstream tasks to verify the effectiveness of the graph contextual training. LC-PLM-G outperforms its vanilla variant on 3/4 TAPE tasks, as shown in the table below. We also want to note that, comparing two LC-PLM-G variants trained on different PPI graphs, the performance varies considerably, which indicates that the data quality of the PPI graph is also important for the performance boost. We think building up a high-quality PPI graph can be meaningful future work that makes the pretrained pLM better. 
Regarding the fact that incorporating PPI graphs drastically hurts the performance of ESM-2, this is potentially due to **the poor length extrapolation capability (as shown in Figure 6) and the catastrophic forgetting issue** [[Kenneweg et al.](https://arxiv.org/abs/2404.01317), [Luo et al.](https://arxiv.org/abs/2308.08747)] of Transformers. In Figure 6, we show that if we train ESM-2 on longer sequences, the model will fail to extrapolate on both shorter and longer sequences. Thus, after the second-phase graph context training, ESM-2 forgets the high-quality representations for regular-length protein sequences (shorter than graph-contextualized sequences) learned in the first-phase pretraining and fails to extrapolate on these shorter sequences. It can only provide degenerated representations of them.\"}", "{\"title\": \"General response to all reviewers [2/2]\", \"comment\": \"> **3. Distinctions between LC-PLM and ProtMamba**\", \"there_are_two_key_distinctions_between_our_method_and_protmamba\": [\"Definition of long contexts for proteins: ProtMamba exclusively uses evolutionary contexts of individual proteins in the form of flattened MSAs, whereas LC-PLM-G primarily uses functional and structural contexts of proteins stored in PPI graphs. In fact, LC-PLM-G\u2019s graph contextual method is more generalizable: one can feed the evolutionary contexts in the form of sequence graphs, an alternative representation of MSA [[Benedict et al. 2014](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4082375)], to train or run inference with our LC-PLM-G model. We added these discussions to our manuscript and consider these new directions as future work.\", \"Foundation model vs specialized model: our goal is to build a foundation pLM that can learn meaningful representations for protein sequences and generalize across various downstream tasks, including prediction of protein structures, functions, fitness, and interactions. 
On the other hand, ProtMamba is trained to use homologous sequences as context for protein generation tasks (e.g. infilling and fitness prediction), rather than producing useful representations for protein sequences.\", \"> **4. Benefit of SSM/Mamba-based architecture**\", \"SSM/Mamba-based models can adapt to long-context training better than Transformer-based architectures\"], \"we_demonstrate_our_lc_plm_has_favorable_adaptability_to_long_context_tuning\": \"We performed three more downstream protein tasks to evaluate ESM2 and LC-PLM models. In our existing and new experiments (compiled in Table B), after performing the 2nd stage graph training, the performance of ESM2 (Transformers) degrades on many downstream tasks including protein fitness prediction in ProteinGym, Contact map prediction, and TAPE stability prediction. On the other hand, our LC-PLM models maintain or slightly improve their performances on these tasks after the 2nd stage graph training. These results suggest it is difficult to tune Transformer models to adapt to longer contexts.\", \"table_b\": \"Evaluation of pLMs before and after 2nd stage graph context training on downstream tasks.\\n\\n| Models | PPI Graph | Jacobian Contact Map | Stability | Fluorescence |\\n|-------------------------|------------------|-----------------------|--------------------|--------------------|\\n| ESM-2-650M (100B) | None | 44.05 | 0.763 \\u00b1 0.008 | 0.695 \\u00b1 0.002 |\\n| ESM-2-G-650M (100B) | ogbn-proteins | 32.35 | 0.750 \\u00b1 0.016 | 0.694 \\u00b1 0.002 |\\n| ESM-2-G-650M (100B) | ogbl-ppa | 26.66 | 0.753 \\u00b1 0.009 | 0.693 \\u00b1 0.001 |\\n| LC-PLM-790M (100B) | None | 47.10 | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.003** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n\\n- Practical applications that would benefit from efficient pLMs 
\\n\\nMamba-based models enjoy favorable inference efficiency in terms of both time and space complexity [[Gu & Dao, 2024](https://arxiv.org/abs/2312.00752)]. This inference efficiency is important in computational protein design/drug discovery applications: one usually generates up to 10^6 protein sequences as candidates [[Adolf-Bryfogle et al., 2018](https://pubmed.ncbi.nlm.nih.gov/29702641/)], and fine-tuned pLMs can be used for scoring, ranking, and filtering these designed sequences. The per-step constant time complexity of SSMs could be an advantage in accelerating this phase.\"}", "{\"title\": \"Response to Reviewer jUT5 [1/n]\", \"comment\": \"We thank the reviewer for their feedback on our paper. We performed multiple novel ablations and comparisons that we report in the main comment of the rebuttal. We also address the reviewer\u2019s specific comments and concerns below:\\n\\n>**1. Originality and comparison to ProtMamba.**\\n\\nWe respectfully disagree with the assertion that \u201cThis method already exists as ProtMamba\u201d. We would like to clarify several key distinctions:\\n\\n- We have **discussed the key differences between our work and ProtMamba [1] in lines 152-153 and 158-161 and Table 1**. To summarize again, *ProtMamba is trained on concatenated homologous protein sequences* with autoregressive causal language modeling and an infilling objective, *focusing on protein sequence generation*. However, our LC-PLM focuses on *learning a foundation-level long-context protein language model (pLM)* that can provide universal amino acid level protein representations for extremely long protein sequences, as well as protein complexes, multimers, and heterodimers with encoded protein interaction context information. We would like to **demonstrate that LC-PLM is a much better foundation pLM than the previous open-sourced SOTA foundation pLMs (i.e. 
ESM-2 [[Lin et al.](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1.full.pdf)] and CARP [[Yang et al.](https://www.biorxiv.org/content/10.1101/2022.05.19.492714v2.full.pdf)])**. We additionally argue that, unlike ProtMamba [[Sgarbossa et al.](https://www.biorxiv.org/content/10.1101/2024.05.24.595730v1)], which uses homologous sequences as context, our LC-PLM-G proposes a novel way to encode protein interaction graph information that contextualizes functionally related proteins rather than semantically similar ones.\\n\\n- Here we provide new experimental results directly comparing our LC-PLM to ProtMamba. In our new experiments, we evaluate ProtMamba on the downstream tasks we used in our paper. **The performance of ProtMamba is much lower than LC-PLM (and even CARP and ESM-2, as shown in the general response) across all tasks**. This suggests that ProtMamba pretrained with concatenated homologous protein sequences potentially leads to degraded representations of individual protein sequences. After all, ProtMamba is trained to use homologous sequences as context for protein generation tasks (e.g. infilling and fitness prediction), rather than to produce useful representations for protein sequences. It is also worth noting that **ProtMamba cannot extrapolate to sequences of length > 2048** since it uses fixed-length positional encodings in training. We summarized the results in the tables below. We added these results to our manuscript. Also, as a side note, [ProtMamba](https://openreview.net/forum?id=BMfHO2lXGe) is a concurrent submission to ICLR 2025, so other submissions can be excused for not comparing against it according to [ICLR\u2019s policy](https://iclr.cc/Conferences/2025/FAQ).\"}", "{\"title\": \"Response to Reviewer sY9s [2/n]\", \"comment\": \"Table A. Protein structure prediction with LMFold. 
Structure prediction performance (TM score) are reported on different hold out datasets.\\n| Models | CASP15-multimers | CASP14 | Benchmark2 |\\n|-----------------------|-----------------------------------------------|--------------------------------------|------------------------------------|\\n| LC-PLM-790M (100B) | **0.5109 \\u00b1 0.0070** | **0.4154 \\u00b1 0.0080** | **0.6290 \\u00b1 0.0071** |\\n| ProtMamba-public | N/A (ProtMamba cannot run on protein sequences with > 2048 length) | 0.3288 \\u00b1 0.0091 | 0.4515 \\u00b1 0.0062 |\\n\\nTable B. Evaluation on TAPE tasks in supervised fine-tuning setting. We report the top-1 accuracy for the Remote Homology fold-level test set; accuracy for the 3-class secondary structure prediction on the CB513 test set; Spearman\\u2019s correlation coefficients for the test sets for the Stability and Fluorescence prediction tasks. For the Jacobian contact map prediction task, we adopted the methods from [[Zhang et al.](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of ProteinNet dataset [[AlQuraishi](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)].\\n| Models | PPI Graph | Jacobian Contact Map | Remote Homology | Secondary Structure | Stability | Fluorescence |\\n|-----------------------|-----------------|-----------------------|---------------------|---------------------|-----------------|-----------------|\\n| LC-PLM-790M (100B) | None | 47.1 | 35.14 \\u00b1 1.69 | **85.07 \\u00b1 0.03** | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **35.74 \\u00b1 0.93** | 85.02 \\u00b1 0.11 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.033** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | 35.60 \\u00b1 1.45 | 85.01 
\\u00b1 0.03 | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n| ProtMamba-public | None | 10.96 | 17.82 \\u00b1 1.85 | 68.43 \\u00b1 0.06 | 0.726 \\u00b1 0.012 | 0.688 \\u00b1 0.005 |\"}", "{\"summary\": \"The authors propose protein language models, denoted as LC-PLM and LC-PLM-G, respectively.\\nLC-PLM is based on a Mamba-based architecture, which they call BiMamba-S.\\nTwo main ideas of BiMamba-S are bidirectionality and shared projection layers for forward and flipped inputs.\\nSharing of layers allows deeper models, since the number of parameters is reduced.\\nThe authors also suggest an extension for knowledge graphs, which leads to LC-PLM-G.\\nGraphs are represented as random walks between nodes, where nodes are protein sequences themselves and edges are indicated by an EDGE token.\\nThey also use negative random walk samples, where there is no edge between nodes, and mark these with a special NO_EDGE token.\\nThe authors apply their methods/models to several downstream tasks and find superiority of their method over trained versions of ESM2.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"relevant topic\", \"good empirical results\", \"innovative idea of how to integrate knowledge graphs into the models\", \"well-written paper\"], \"weaknesses\": [\"It would be interesting to see the performance of ProtMamba included in comparisons, where it makes sense.\", \"It would be interesting to additionally see the performance on Contact Prediction, Fluorescence and Stability among the TAPE tasks.\", \"Criteria for Table 1 should be better specified (e.g., when is a method considered to be universal?)\", \"line 101: here also \\\"Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen\\\" should be cited.\"], \"questions\": [\"Which context size do you use for ESM-2? 
Could it have been increased for the experiments you carried out (i.e., did ESM2 and LC-PLM use approximately the same memory?)?\", \"What hyperparameters did you use for ESM-2? How was hyperparameter selection done for ESM-2 and LC-PLM(-G)?\", \"RQ3:\\\\\", \"In contrast to some other experiments, UniRef50 is used for training. Evaluation is on UniRef90. Why is this the case?\", \"line 462-464: \\\"As shown in Figure 7, the embeddings from LC-PLM-G captures the graph topology much better than LC-PLM, which aligns with the community detection results.\\\"\\\\\", \"What is the criterion to see that LC-PLM-G captures the graph topology much better?\", \"line 439-440: \\\"This also suggests that, even for average-length protein sequences, long-range dependencies would be useful information and an important feature for protein structure prediction.\\\"\\\\\", \"What is the exact indication to come to this conclusion?\", \"line 533: \\\"LC-PLM achieved 7% to 34% better performance on various downstream tasks.\\\" Which task exactly do you mean with 7% and which with 34%?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"First remarks:\\n- I have to say that I did NOT read the paper again and also did not carefully check whether all the points the authors promised are now included.\\n- I have roughly looked over the other reviews and the responses, but not in full detail.\\n\\nWith respect to point 3):\\nI am not sure whether I could find the clarification in the uploaded PDF, as the authors did not seem to mark their changes with another color.\\nEspecially (with respect to point 3), I did not only mean that the criterion \\\"universality\\\" needs a more accurate definition, but also all the other criteria (Fine granularity, Handleability, Performance, Graph context, Large-scale model). 
What is the **exact** criterion for a check mark and what is the **exact** criterion that there is no check mark?\\nI think this needs to be clarified, as otherwise it might be unfair to other methods. Please describe as precisely as possible.\\n\\nI will keep my score, as I think it is sufficiently high. I however think the area chair should possibly form their own opinion by having a look through the paper and the discussions here and decide whether other reviewers might have given a too low score.\"}", "{\"title\": \"Reviewer's Answer to the Authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your detailed answers. In the following you will find some follow-up thoughts: \\n\\n**Novelty**: \\nTo improve clarity regarding the fact that BiMamba-S is not a novel architectural component but was introduced previously and applied in this work to another domain, you might consider moving the BiMamba-S section to the Preliminaries section.\\n\\n**Goal of Learning a Long-Context Foundation pLM**:\\nThank you for clarifying the focus of your work. With this context in mind, I believe the manuscript requires improvements in clarity, motivation, and focus:\\n\\na) Clarity and Motivation: You should define what properties should a protein foundation model have and how could you test them. I guess, showing downstream task performances to demonstrate representation learning capabilities makes sense, just the motivation is missing.\\n\\nb) Universal Representation Claim: The manuscript states that LC-pLM \\u201clearns universal AA token-level protein representations\\u201d (line 161). 
However, this claim is vague, and the term \u201cuniversal\u201d is not well-defined.\\n\\nc) Long-Context Motivation: I agree with reviewer 8QMc\u2019s initial comment that the necessity for long-context capabilities isn\u2019t well motivated in the manuscript, despite the fact that Figure 6 is relevant and interesting, and implicitly shows the foundation models\u2019 need to be able to adapt to larger sequences.\\n\\nd) Multi-Modal Foundation Models: The authors state in one of the review answers that they want to focus on pure sequence foundation models. Will trained models be relevant though? Wouldn\u2019t people just use multi-modal models like ESM-3? I guess one cannot expect the proposed method to match ESM-3\u2019s performance. However, it would be important to know how significant the differences are.\\n\\nI appreciate the performance gains demonstrated by LC-pLM in the experiments. However, my concerns regarding architectural novelty remain. If the primary focus is to develop \u201cprotein foundation models,\u201d the manuscript would benefit from significant restructuring and rewriting to reflect that focus more clearly.\\n\\nFor these reasons, I will stick to my initial rating.\"}", "{\"title\": \"Discussion [1/n]\", \"comment\": \"Thank you for your feedback. Here we provide further discussion.\\n\\n> 1. No novelty of BiMamba-S; moving it to Preliminaries.\\n\\nWe respectfully disagree. \\n\\n(1) Note that, as we mentioned in our rebuttal, Caduceus has a similar architectural design choice that is **not exactly the same**. Here are several key differences: \\n- \\\"Shared layers\\\" are not equal to \\\"tied weights\\\". (a) \\\"Shared layers\\\" are more efficient since they reuse the same layer object, whereas weight tying tracks separate layers with identical weights, introducing minor overhead. 
(b) \\\"Shared layers\\\" simplify optimization as they operate on a single computational graph, while weight tying requires additional constraints to enforce equality during training.\\n- We have another design choice \\\"untied input/output embedding layers\\\" and we carefully study it. This can help alleviate collapsed embedding space and improve the uniformity of learned embeddings, which make the model more expressive and avoid *anisotropic* issue.\\n- We perform the reverse operation after a normalization layer and we provide a residual connection at the end of the block to help gradient flow.\\n\\n(2) Also note that Caduceus is not the first one proposing to use bidirectional Mamba (BiMamba). There has been a lot of works built off BiMamba, just as we discussed in our manuscript -- \\\"... time-series forecasting (Liang et al., 2024), audio representation learning (Erol et al., 2024), visual representation learning (Zhu et al., 2024), DNA modeling (Schiff et al., 2024), and graph learning (Behrouz & Hashemi, 2024).\\\" These works came out on various dates but all discussed their BiMamba architecture designs in their model/methodology. We think it's worth to also discuss our own architectural designs in the method part to indicate the similar insights and key differences in a totally different field.\\n\\n> 2. Clarity and motivation.\\n\\nNote that we're not proposing a *protein foundation model*; instead, we're introducing another **foundation-level protein language model** (pLM). We strongly recommend the reviewer to read ESM-2 (Lin et al.) to get the motivation of having a foundation-level pLM if they're not familiar with this field and think the motivation is missing. \\n\\nAlso, the community hasn't had a clear definition of \\\"foundation model\\\" and there has been a long debate on this. Thus, it's also not easy for us to define its necessary properties and claim we achieved \\\"foundation\\\" in our work. 
Given these, we choose to avoid this term and focus on demonstrating that we have a better pLM than existing works. As we introduced in the manuscript, we support this with the following evidence:\\n- *We demonstrate that LC-PLM has improved length extrapolation capabilities, favorable scaling laws, and achieved a 7% to 34% improvement on downstream tasks (e.g. protein structure prediction (CASP15-multimers, CASP14, Benchmark2), tasks in TAPE and ProteinGym) compared to ESM-2, especially for longer proteins and protein complexes.*\\n- *To encode biological interaction information, we propose a novel second-stage training based on random walks over graphs to extend the long-context capabilities of LC-PLM to leverage the PPI graph context. We demonstrate its effectiveness in capturing graph-contextual information on remote homology detection (TAPE), protein function prediction (ogbn-proteins), and PPI link prediction (ogbl-ppa).*\\n\\n> 3. \\\"Universal Representation\\\" is not well-defined.\\n\\nWe already addressed this concern in our reply to Reviewer jUT5. For your convenience, we restate it here:\\n\\nWe use the term \\u201cuniversal\\u201d since (1) this term has been widely used in the protein representation learning literature [[Alley et al.](https://pubmed.ncbi.nlm.nih.gov/31636460/), [Detlefsen et al.](https://www.nature.com/articles/s41467-022-29443-w)], where researchers describe pLMs as *\\u201c[learning universal, cross-family representations of protein space](https://www.nature.com/articles/s41467-022-29443-w)\\u201d*; and (2) we pretrain our models on the **Universal** Protein Reference Clusters (UniRef) dataset, which contains universal protein sequence resources. 
Given such learned high-quality protein embeddings, we can achieve decent performance across a variety of downstream tasks, which demonstrates the universality of such protein representations.\"}", "{\"title\": \"Kindly reminder of the end of rebuttal\", \"comment\": \"Dear Reviewer 8QMc,\\n\\nAs the discussion period is coming to an end tomorrow, we kindly ask you to review our response to your comments and let us know if you have any further queries. Alternatively, if you think we have addressed your concerns properly and could raise the rating of the paper, we would be extremely grateful. We eagerly anticipate your response and are committed to addressing any remaining concerns before the discussion period concludes.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer sY9s [3/n]\", \"comment\": [\"We agree that Caduceus [[Schiff et al.](https://arxiv.org/abs/2403.03234)] used a similar architectural design choice, and we did cite them and other papers using BiMamba in lines 219-221. However, we want to note that we used BiMamba-S since it is the most appropriate way to realize bi-directionality in Mamba. **BiMamba-S can be formulated in a more theoretical way as structured SSMs with quasi-separable matrices [[Hwang et al.](https://arxiv.org/abs/2407.09941)]**. We did not apply quasi-separable mixers in our model since Mamba currently has a much **better software-hardware interface environment and distributed training support** in practical implementations, which helps us train a large foundation-level model feasibly and efficiently. We added this discussion in our paper as well.\", \"We also want to note that, just like the Transformer, which has been used in numerous impactful works such as GPT and ESM-2, BiMamba-S also serves as a versatile and effective architectural choice for long-context sequence modeling. The reuse of proven architectures like Transformer or BiMamba does not diminish the novelty of a work. 
Instead, it allows researchers to focus on solving domain-specific challenges (in our case, exploring the capability of building up a new type of foundation pLM based on BiMamba-S). Our work follows this principle, leveraging this architecture to demonstrate its effectiveness in pushing the boundaries of foundation pLMs.\", \"Besides the architectural choice based on Mamba, we have **many other important contributions to this field, as summarized in the Introduction**: (1) a method to encode biological interaction information into pLMs: we propose a novel second-stage training based on random walks over graphs to extend the long-context capabilities of LC-PLM to leverage the PPI graph context; (2) we demonstrate that LC-PLM has improved length extrapolation capabilities, favorable scaling laws, and achieved a 7% to 34% improvement on downstream tasks (e.g. protein structure prediction (CASP15-multimers, CASP14, Benchmark2), tasks in TAPE and ProteinGym) compared to ESM-2 and CARP, especially for longer proteins and protein complexes; (3) we demonstrate its effectiveness in capturing graph contextual information on remote homology detection (TAPE) in Table 3, protein function prediction (ogbn-proteins) in Tables 14 and 16, and PPI link prediction (ogbl-ppa) in Tables 15 and 17.\", \"We want to highlight that our goal is to build up a foundation pLM that can learn meaningful universal amino acid level representations for **pure** protein sequences and generalize across various downstream tasks. That said, for ProteinGym etc., it is not fair to compare our method to other ad-hoc methods built off pre-trained pLMs. 
For example, on the ProteinGym leaderboard, SOTA methods like PoET [[Truong Jr et al.](https://arxiv.org/pdf/2306.06156)] and TranceptEVE [[Notin et al.](https://openreview.net/forum?id=l7Oo9DcLmR1)] rely on combining family-specific models or alignment-based methods with a foundation pLM; SaProt [[Su et al.](https://www.biorxiv.org/content/10.1101/2023.10.01.560349v5)] is a pretrained pLM with massive protein structure data. We added these SOTA methods to our table, but **we want to emphasize again that our work is to develop a foundation pLM (i.e. LC-PLM and LC-PLM-G) that can be utilized as a better pLM backbone in all these works**. Such ad-hoc designs built on top of a base pLM to improve on specific tasks are beyond the scope of our paper. Our original table **only compares foundation pLMs trained on protein sequences alone**.\"]}", "{\"title\": \"Response to Reviewer 8QMc [2/n]\", \"comment\": [\">**3. Compute-efficient Transformers.**\", \"As we discussed in the above first and second points, our main goal is not to design a compute-efficient sequence modeling architecture. We emphasize other advantages of SSMs/linear RNNs over Transformers for learning a pLM with better performance. The compute efficiency of SSMs is an advantage that we naturally gain from this design choice. We also want to note that Flash Attention is a hardware-efficient implementation of Transformers that **cannot reduce the training and inference time complexity of Transformers**. It only provides a faster implementation in terms of wall-clock time. 
Mamba currently also has decent hardware-efficient implementation [[Dao & Gu](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2405.21060&ved=2ahUKEwiEuaD_xfCJAxU1EVkFHcdFIUUQFnoECBoQAQ&usg=AOvVaw2pz2eYNg_qM7rYXQYQIIqH),[Megatron](https://github.com/NVIDIA/Megatron-LM/blob/main/pretrain_mamba.py)] such as parallelized associative scan [[Gu & Dao](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2312.00752&ved=2ahUKEwjK6p2axvCJAxUQMlkFHYBLAyoQFnoECAwQAQ&usg=AOvVaw3crj6SFh5WpnEaozDiZhbi)] to make it more efficient in terms of wall-clock time. Other efficient Transformers that leverage approximate attention mechanisms (e.g. linear attention) are essentially a reformulation of linear RNNs/SSMs [[Katharopoulos et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2006.16236&ved=2ahUKEwiPr4PJxfCJAxVxEVkFHRigOSEQFnoECBUQAQ&usg=AOvVaw2btv9SP2yqT9dOEVjuLQ0G), [Yang et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://openreview.net/pdf%3Fid%3Dia5XvxFUJT&ved=2ahUKEwiPr4PJxfCJAxVxEVkFHRigOSEQFnoECCcQAQ&usg=AOvVaw2HieVMQ_M3Z209OmN0w5o3)].\", \"We also want to note that there exist many decent linear RNNs/Transformers architectures in the community but our work is not debating on selecting/tailoring a specific architecture. Just like how Transformers have been used in numerous impactful works such as GPT and ESM-2, BiMamba-S also serves as a versatile and effective architectural choice for sequence modeling. The reuse of proven architectures like Transformer or BiMamba does not diminish the novelty of a work. Instead, it allows researchers to focus on solving domain-specific challenges (in our case, exploring the capability of building up a new type of foundation pLM based on BiMamba-S). 
Our work follows this principle, leveraging this architecture to demonstrate its effectiveness in pushing the boundaries of foundation pLMs.\", \"Although compute-efficient architectures are not our focus, we also want to argue that there is indeed practical demand for compute-efficient pLMs, as we discussed above. The inference efficiency of pLMs is important in computational protein design/drug discovery applications: one usually generates up to $10^6$ protein sequences as candidates [[Adolf-Bryfogle et al., 2018](https://pubmed.ncbi.nlm.nih.gov/29702641/)], and fine-tuned pLMs can be used for scoring, ranking, and filtering these designed sequences. The per-step constant time complexity of SSMs could be an advantage in accelerating this phase.\", \">**4. Why not other SOTAs?**\", \"We want to highlight that our goal is to build up a foundation pLM that can learn meaningful universal amino acid level representations for **pure** protein sequences and generalize across various downstream tasks. As you pointed out, our well-trained pLM **can be used as a drop-in replacement for ESM-2 to accommodate any of these existing methods on their tasks**. In fact, we demonstrated this in practice with the LMFold experiments (a generalization of ESMFold), where a simple folding trunk is trained to predict protein structures based on pLM embeddings. We only show the comparison against existing SOTA open-sourced pLMs trained on protein sequences alone to avoid confounding gains resulting from ad-hoc methods built off pre-trained pLMs. 
For example, on the ProteinGym leaderboard, SOTA methods like PoET [[Truong Jr et al.](https://arxiv.org/pdf/2306.06156)], TranceptEVE [[Notin et al.](https://openreview.net/forum?id=l7Oo9DcLmR1)] rely on combining family-specific models or alignment-based methods with a foundation pLM; SaProt [[Su et al.](https://www.biorxiv.org/content/10.1101/2023.10.01.560349v5)] is a pretrained pLM with massive protein structure data. We added these SOTA methods to our table but **we want to emphasize again that our work is to develop a foundation pLM (i.e. LC-PLM and LC-PLM-G) that can be utilized as a better pLM backbone in all these works**. And we believe these methods built off our LC-PLM have a high probability of outperforming ESM-2 given all the evidence we provided in our work.\"]}", "{\"title\": \"Reply to Reviewer jUT5\", \"comment\": \"Thank you for your feedback. We hope you were able to review our response and acknowledge that we did perform additional experiments as requested.\\n\\n> The differences to protmamba are just in the collection of training; the method is identical to ProtMamba; the adaption has been done in ProtMamba.\\n\\nWe have many more differences/contributions than ProtMamba, which we discussed in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=q7eSAEetLN) and our paper. We summarize them again as follows:\\n- We introduced a novel method to encode biological interaction information into pLM, i.e.
a novel second-stage training based on random walks over graphs to extend the long-context capabilities of LC-PLM to leverage the PPI graph context; we discussed this in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=q7eSAEetLN) and our paper;\\n- In terms of architectural design, we carefully studied (1) bidirectionality of Mamba blocks, (2) shared projection layers in bidirectional Mamba blocks, and (3) untied input/output embedding layers to improve the uniformity of embeddings, none of which ProtMamba studied; we discussed this in our paper;\\n- We demonstrate that LC-PLM has improved length extrapolation capabilities, favorable scaling laws, and achieved a 7% to 34% improvement on downstream tasks (e.g. protein structure prediction (CASP15-multimers, CASP14, Benchmark2), tasks in TAPE and ProteinGym) compared to ESM-2, CARP, especially for longer proteins and protein complexes; we discussed this in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=q7eSAEetLN) and our paper;\\n- We demonstrate its effectiveness in capturing graph contextual information on remote homology detection (TAPE) in Table 3, protein function prediction (ogbn-proteins) in Tables 14 and 16, and PPI link prediction (ogbl-ppa) in Tables 15 and 17; we discussed this in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=q7eSAEetLN) and our paper;\\n- In terms of pretraining data used for the 2nd stage, we believe a graph of sequences is a more generalizable data structure as it subsumes 1) a set of sequences and 2) a sequence of sequences; we discussed this in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=9ZP4M9VrFd) and our paper;\\n- Also as a side note, ProtMamba (https://openreview.net/forum?id=BMfHO2lXGe) is a concurrent submission to ICLR 2025 and just an online preprint, so other submissions\\u2019 contributions should not be undermined according to ICLR\\u2019s policy (https://iclr.cc/Conferences/2025/FAQ); we mentioned this
in the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=q7eSAEetLN); despite this, we added a direct comparison to them in the rebuttal and our method beats ProtMamba across all tasks (e.g. outperforming ProtMamba by up to 331% on contact map), as shown in Tables A and B of the rebuttal (https://openreview.net/forum?id=Et0SIGDpP5&noteId=sMTKYS7SUe; and https://openreview.net/forum?id=Et0SIGDpP5&noteId=JC3OWeSLe7), and Tables 2 and 3 in our revised manuscript.\\n\\n> ProtMamba is a foundation model.\\n\\nWe disagree. Since ProtMamba is pretrained with concatenated homologous sequences from MSA, it is expected to perform well on tasks where the inputs are a set of homologous sequences, such as deep mutational scan data. However, as our experiments show (in Tables A and B from our rebuttal, in Tables 2 and 3 from our revised manuscript), it suffers on other tasks where the input format is individual proteins, including four TAPE tasks and protein structure prediction tasks. A foundation-level protein model should perform well on all these protein property prediction tasks and can be used as a building block for specialized tasks such as mutational fitness prediction (e.g. [TransceptEVE](https://www.biorxiv.org/content/10.1101/2022.12.07.519495v1.full.pdf)) and structure prediction (e.g. [ESMFold](https://www.science.org/doi/10.1126/science.ade2574)).\\n\\n> Tenuous claim \\u201cProtMamba cannot extrapolate to sequence > 2048\\u201d.\\n\\nAs we pointed out in our rebuttal, ProtMamba used positional encodings (PEs) (https://github.com/Bitbol-Lab/ProtMamba-ssm/blob/8befff756b2db7b6dc56d0a07163eb02e27b2731/ProtMamba_ssm/modules.py) which makes it unable to extrapolate to sequences > 2048 on any downstream tasks that need fine-tuning on longer sequences.
Also ProtMamba used learnable PE (https://github.com/Bitbol-Lab/ProtMamba-ssm/blob/8befff756b2db7b6dc56d0a07163eb02e27b2731/ProtMamba_ssm/modules.py#L464), an improper design choice that harms the length-extrapolation capability of the model, since such a PE has been shown not to extrapolate in prior literature [[Zhao et al.](https://arxiv.org/abs/2312.17044); [Sun et al.](https://aclanthology.org/2023.acl-long.816.pdf)].\\n\\nWe respectfully ask the reviewer to **stop using terms like \\u201ctenuous claims\\u201d unless they carefully read our paper and rebuttal**. Have a great day.\"}", "{\"title\": \"Discussion\", \"comment\": \"The differences to ProtMamba are just in the collection of training data and -- as the authors state -- in the focus on a foundation-level long context protein model. Also ProtMamba can be seen as a foundation-level protein model. The approach is still almost identical to ProtMamba (pre-print available since May). The adaptation of a general architecture (such as Transformer, Mamba, etc) to a new application domain would be a relevant application paper, but this step has already been done. The rebuttal has again tenuous claims such as \\\"ProtMamba cannot extrapolate to sequence > 2048\\\", where it is clearly shown to perform well even for context sizes of 2^17. The claimed differences are overall too minor to represent a new ML application.\\n\\nI appreciate the authors' enthusiasm about their method and the effort that went into this rebuttal, but I decide to keep my score.\"}", "{\"title\": \"Response to Reviewer jUT5 [3/n]\", \"comment\": \"- Also, we want to note that unlike ProtMamba, we propose to use BiMamba-S, which is the most appropriate way to realize bi-directionality in Mamba. **BiMamba-S can be formulated in a more theoretical way as structured SSMs with quasi-separable matrices [[Hwang et al.](https://arxiv.org/abs/2407.09941)]**.
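For intuition about the bi-directionality just described, here is a minimal toy sketch (our illustration, not code from either paper; a scalar linear recurrence stands in for the selective scan, and the single shared `proj` mimics the tied projection layers of BiMamba-S):

```python
def ssm_scan(xs, a=0.9):
    """Toy linear recurrence h_t = a*h_{t-1} + x_t, standing in for a selective scan."""
    h, out = 0.0, []
    for x in xs:
        h = a * h + x
        out.append(h)
    return out

def bimamba_s_block(xs, proj=lambda x: 2.0 * x):
    """Run the same scan forward and over the reversed sequence, then merge.

    Using one `proj` for both directions mimics the shared projection layers
    (the 'S' in BiMamba-S); an unshared variant would use two projections.
    """
    fwd = ssm_scan([proj(x) for x in xs])
    bwd = ssm_scan([proj(x) for x in reversed(xs)])[::-1]
    return [f + b for f, b in zip(fwd, bwd)]
```

Because each output mixes a left-to-right and a right-to-left state, every position sees context from both sides without any attention map or positional-embedding table.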
We did not apply quasi-separable mixers in our model since **Mamba currently has a much better software-hardware interface environment and distributed training support in practical implementations that help us train a large foundation-level model feasibly and efficiently**. We added this discussion in our paper as well. We also want to argue that, just like Transformer, which has been used in numerous impactful works such as GPT and ESM-2, BiMamba-S also serves as a versatile and effective architectural choice for sequence modeling. The reuse of proven architectures like Transformer or BiMamba does not diminish the novelty of a work. Instead, it allows researchers to focus on solving domain-specific challenges (in our case, exploring the capability of building up a new type of foundation pLM based on BiMamba-S). Our work follows this principle, leveraging this architecture to demonstrate its effectiveness in pushing the boundaries of foundation pLM.\\n\\n- Besides the architectural choice based on Mamba, we have **many other important contributions to this field as summarized in the Introduction**: (1) method to encode biological interaction information into pLM, we propose a novel second-stage training based on random walks over graphs to extend the long-context capabilities of LC-PLM to leverage the PPI graph context; (2) we demonstrate that LC-PLM has improved length extrapolation capabilities, favorable scaling laws, and achieved a 7% to 34% improvement on downstream tasks (e.g. protein structure prediction (CASP15-multimers, CASP14, Benchmark2), tasks in TAPE and ProteinGym) compared to ESM-2, CARP, especially for longer proteins and protein complexes; (3) we demonstrate its effectiveness in capturing graph contextual information on remote homology detection (TAPE) in Table 3, protein function prediction (ogbn-proteins) in Table 14 and 16, and PPI link prediction (ogbl-ppa) in Table 15 and 17.\\n\\n>**2. 
Significance.**\\n\\nWe respectfully disagree with the assertion of \\u201cThe significance of this work is also diminished by many false and tenuous claims\\u201d.\\n\\n- *\\\"Such protein LMs cannot extrapolate to longer proteins [..]\\\": this is not true, and cannot and has not been shown theoretically nor empirically. Transformer-based pLMs can extrapolate to longer contexts and can also handle relatively long contexts (e.g. 16k context size).*\\n\\nSince transformers are invariant to input position, they usually use positional encodings along with the sequence input. These encodings can be difficult to extend past the maximum length seen during training. Most transformer-based foundation pLMs limit the input length during pretraining such that they cannot extrapolate during inference. For example, ESM-2 has a maximum input length of 1024 residues; and it performs worse on both shorter and longer sequences [https://github.com/facebookresearch/esm/discussions/76]. We also provide evidence in Table 6 that ESM-2 fails to extrapolate on both longer (2k-8k) and shorter (0-128) sequences. It is worth noting that ESM-2 even used extrapolatable positional encodings [[Zhao et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2312.17044&ved=2ahUKEwjfuqnNzfCJAxWAMlkFHeDgABUQFnoECBYQAQ&usg=AOvVaw0fldeIoSBJQd5fNgnKjOAN)], i.e. RoPE [[Su et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2104.09864&ved=2ahUKEwi08O3jzfCJAxWgGlkFHQIeNgkQFnoECAwQAQ&usg=AOvVaw2CO33TcxkzYh5uPFyjwCsf)], which however performs poorly for protein sequences. CARP as a new type of attention-free foundation pLM increased the input sequence length to 4k but performs worse than ESM-2 due to its lack of expressiveness. 
Our LC-PLM built off BiMamba-S can take in infinite-length input sequences and extrapolate well on both shorter and longer sequences; and meanwhile provide even better performance on regular length proteins across various downstream tasks.\"}", "{\"title\": \"Response to Reviewer 8QMc [4/n]\", \"comment\": \"| Models | PPI Graph | Jacobian Contact Map | Remote Homology | Secondary Structure | Stability | Fluorescence |\\n|-----------------------|-----------------|-----------------------|---------------------|---------------------|-----------------|-----------------|\\n| LC-PLM-790M (100B) | None | 47.1 | 35.14 \\u00b1 1.69 | **85.07 \\u00b1 0.03** | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **35.74 \\u00b1 0.93** | 85.02 \\u00b1 0.11 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.033** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | 35.60 \\u00b1 1.45 | 85.01 \\u00b1 0.03 | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n\\n>**9. Concrete modeling examples with biological significance that we have to turn to Mamba because the attention/Transformer sucks or even fails to handle the problem at all? Can the authors provide further evidence that the extended context window length improves the \\u201crelated\\u201d downstream protein tasks beyond the results presented?**\\n\\n- We appreciate this comment and address reviewer\\u2019s concern in twofold: \\n\\n--- (1) LC-PLM has favorable adaptability to long-context tuning: We performed three more downstream protein tasks to evaluate ESM2 and LC-PLM models. 
In our existing and new experiments (compiled in Table A), after performing the 2nd stage graph training, the performance of ESM2 (Transformers) degrades on many downstream tasks including protein fitness prediction in ProteinGym (Table 4, Spearman\u2019s rho drop from 0.295 to 0.109), Contact map prediction (New table below, precision drops from 44.1 to 26.7), and TAPE stability prediction (New table below, Spearman\u2019s rho drop from 0.763 to 0.750). On the other hand, our LC-PLM models maintain or slightly improve their performances on these tasks after the 2nd stage graph training. These results suggest it is difficult to tune Transformer models to adapt to longer contexts. \n\n--- (2) Mamba-based models enjoy favorable inference efficiency in terms of both time and space complexity. This will satisfy the practical demands for in-silico protein design, where one needs to screen $10^6$ sequences using pLM-based methods. The constant time complexity of Mamba/SSMs could be an advantage in accelerating this phase. There is also a GPU memory constraint in performing inference with the Transformer/ESM2 model on long protein sequences that users of the ESM2 model have been facing [[issue1](https://github.com/facebookresearch/esm/issues/21), [issue2](https://github.com/facebookresearch/esm/issues/49)].\", \"table_a\": \"Evaluation of pLMs before and after 2nd stage graph context training on downstream tasks. For the Jacobian contact map prediction task, we adopted the methods from [[Zhang et al.](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of ProteinNet dataset [[AlQuraishi](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)].
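For intuition, the categorical-Jacobian recipe described above can be sketched as follows (our illustration; the nearest-neighbor toy scorer is an invented stand-in for a real pLM's logits, and the sizes are arbitrary):

```python
import numpy as np

def categorical_jacobian_contacts(logits_fn, seq, n_tokens=20):
    """Zero-shot contact scores from a sequence scorer.

    logits_fn(seq) -> (L, n_tokens) array of per-position logits.
    Substitute every amino acid at every position, record how the logits at
    all positions move, then collapse the two amino-acid axes to one score
    per position pair and symmetrize.
    """
    L = len(seq)
    base = logits_fn(seq)
    J = np.zeros((L, n_tokens, L, n_tokens))
    for i in range(L):
        for a in range(n_tokens):
            mutant = list(seq)
            mutant[i] = a
            J[i, a] = logits_fn(mutant) - base  # response of all positions to (i -> a)
    scores = np.sqrt((J ** 2).sum(axis=(1, 3)))  # Frobenius norm over amino-acid axes
    return (scores + scores.T) / 2

# Toy scorer: logits at position j depend only on the tokens at j-1 and j+1,
# so the Jacobian picks up exactly the (i, i±1) couplings. A real pLM would
# supply much richer, longer-range couplings.
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 20)) * 0.1
def toy_logits(seq):
    out = np.zeros((len(seq), 20))
    for j in range(len(seq)):
        if j > 0:
            out[j] += W[seq[j - 1]]
        if j + 1 < len(seq):
            out[j] += W[seq[j + 1]]
    return out

seq = [int(t) for t in rng.integers(0, 20, size=6)]
contacts = categorical_jacobian_contacts(toy_logits, seq)
```

Precision@2/L would then be computed by ranking the entries of `contacts` against a ground-truth contact map; that evaluation step is omitted here.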
We report the Spearman\\u2019s correlation coefficients for the test sets for the TAPE Stability prediction tasks and ProteinGym DMS substitutions benchmarks.\\n\\n| Models | PPI Graph | Jacobian Contact Map | TAPE Stability | ProteinGym DMS substitutions |\\n|-----------------------|-----------------|-----------------------|--------------------|-----------------------------|\\n| ESM-2-650M (100B) | None | 44.05 | 0.763 \\u00b1 0.008 | 0.295 \\u00b1 0.013 |\\n| ESM-2-G-650M (100B) | ogbn-proteins | 32.35 | 0.750 \\u00b1 0.016 | 0.109 \\u00b1 0.013 |\\n| ESM-2-G-650M (100B) | ogbl-ppa | 26.66 | 0.753 \\u00b1 0.009 | 0.131 \\u00b1 0.014 |\\n| LC-PLM-790M (100B) | None | 47.1 | 0.794 \\u00b1 0.003 | 0.378 \\u00b1 0.008 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **0.801 \\u00b1 0.001** | **0.380 \\u00b1 0.008** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | **0.801 \\u00b1 0.001** | **0.380 \\u00b1 0.008** |\\n\\n>**10. Weird y-ticks.** \\n\\nThanks for this comment. The $2^1$ is due to the log scale on y-ticks. We fixed it in our manuscript to the normal scale.\"}", "{\"title\": \"Kindly reminder of the end of rebuttal\", \"comment\": \"Dear Reviewer jUT5,\\n\\nAs the discussion period is coming to an end tomorrow, we kindly ask you to review our response to your comments and let us know if you have any further queries. Alternatively, if you think we addressed your concerns properly and could raise the rating of the paper, we would be extremely grateful. We eagerly anticipate your response and are committed to addressing any remaining concerns before the discussion period concludes.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer 8QMc\", \"comment\": \"We acknowledge and commend the reviewer's well-structured feedbacks for further improving our manuscript. We commit to incorporate those reasonable and constructive comments in the next version of our manuscript after the review period. 
While some of the comments are objective and reasonable, we do want to respond to a few points that are unfair and self-contradictory:\n\n> 1. Problem Tackled \n> However, across the tasks and results presented, LC-PLM does not demonstrate parity with or superiority over ESM-2. \n\n\nThroughout the experiments presented in our manuscript, we benchmark LC-PLM against an ESM-2 that we pretrained with the exact same amount of tokens to demonstrate the clear performance advantage of LC-PLM over ESM-2. This comparison is intended to **rigorously** control for the variabilities in the quality and quantity of the pretraining dataset, such that the observed performance differences can be **unbiasedly** attributed to architectural advantages. On the other hand, it would be unfair to compare models pretrained with different datasets to draw conclusions about architectural superiority.\n\n> 2. Motivation \n> PPI graph in my opinion is not a persuasive case. \n\n\nIt would be extremely helpful for the reviewer to articulate the reason why the PPI graph is not a persuasive case for long-context capability, or perhaps suggest a more persuasive case for proteins' long-context use cases/applications. From a systems biology perspective, PPI graphs capture functional contexts for individual proteins to help us understand how proteins function in cells/biological systems. \n\n> 3. Support for Claims \n> Retraining with limited training tokens and applying graph augmentation, which appear to disproportionately affect ESM performance, making the comparison feel unfair. \n\n\nAs mentioned in the response to the reviewer's point 1, the purpose of comparing LC-PLM and ESM2 with the same amount of training tokens is to control for the variabilities in pretraining data. The graph augmentation training for both LC-PLM and ESM2 was also performed under the exact same training data and procedure.
The observed advantage of LC-PLM over ESM2 in a controlled setting helps us tease out the confounding effects from differences in pretraining data. It would be helpful for the reviewer to clarify why such comparison \"_feel_\" unfair. We also respectfully suggest that the reviewer objectively assess scientific works rather than using subjective views. \n\n\n> 4. Significance. \n> However, to establish significance, the authors could either propose new learning algorithms to address foundational protein modeling challenges or demonstrate LC-PLM\u2019s advantages in specific yet biologically meaningful scenarios where ESM-2 underperforms.\n\nIn our manuscript, we demonstrate LC-PLM's advantage over ESM-2 in the graph augmentation scenario, which is biologically meaningful as PPI graphs, which can be generalized to biological knowledge graphs, are valuable tools to study how proteins function in actual biological systems rather than in isolation.\"}", "{\"title\": \"Response to Reviewer jUT5 [5/n]\", \"comment\": \"| Models | PPI Graph | Jacobian Contact Map | Remote Homology | Secondary Structure | Stability (spearman rho) | Fluorescence (spearman rho) |\n|-----------------------|-----------------|-----------------------|---------------------|---------------------|---------------------------|-----------------------------|\n| ESM-2-650M (100B) | None | 44.05 | 26.57 \u00b1 0.49 | 79.86 \u00b1 0.09 | 0.763 \u00b1 0.008 | 0.695 \u00b1 0.002 |\n| ESM-2-G-650M (100B) | ogbn-proteins | 32.35 | 25.60 \u00b1 0.77 | 79.76 \u00b1 0.24 | 0.750 \u00b1 0.016 | 0.694 \u00b1 0.002 |\n| ESM-2-G-650M (100B) | ogbl-ppa | 26.66 | 27.18 \u00b1 0.63 | 79.91 \u00b1 0.24 | 0.753 \u00b1 0.009 | 0.693 \u00b1 0.001 |\n| LC-PLM-790M (100B) | None | 47.1 | 35.14 \u00b1 1.69 | **85.07 \u00b1 0.03** | 0.794 \u00b1 0.003 | 0.692 \u00b1 0.002 |\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **35.74 \u00b1 0.93** | 85.02 \u00b1 0.11 | **0.801 \u00b1 0.001** | **0.709 
\\u00b1 0.003** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | 35.60 \\u00b1 1.45 | 85.01 \\u00b1 0.03 | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n| ProtMamba-public | None | 10.96 | 17.82 \\u00b1 1.85 | 68.43 \\u00b1 0.06 | 0.726 \\u00b1 0.012 | 0.688 \\u00b1 0.005 |\\n| CARP-640M-public | None | 25.83 | 28.0 \\u00b1 0.8 | 83.0 \\u00b1 0.1 | 0.72 \\u00b1 0.01 | 0.68 \\u00b1 0.002 |\\n| ESM2-650M-public (1T, for reference only) | None | 66.85 | 33.43 \\u00b1 0.35 | 84.30 \\u00b1 0.15 | 0.804 \\u00b1 0.006 | 0.688 \\u00b1 0.001 |\\n\\n>**5. Can you clearly state which models have been trained in this work and for which models you just took pretrained versions?**\\n\\nWe clearly indicated where we retrained models / used publicly pretrained models in our manuscripts, e.g. lines 338-341, lines 366-368, lines 433-434, the footnote in Table 2, etc. In summary, all ESM-2(-G) and LC-PLM(-G) models with 100B tokens are retrained; all models with \\u201c-public\\u201d postfix are taken from public pretrained checkpoints. We also provide training and dataset details in Appendices D and E.\"}", "{\"title\": \"Response to Reviewer sY9s [1/n]\", \"comment\": \"We thank the reviewer for their valuable feedback on our paper. We performed multiple novel ablations and comparisons that we report in the main comment of the rebuttal. We also address the reviewer\\u2019s specific comments and concerns below:\\n\\n>**1. Little novelty and lack of comparison.** \\n\\nThank you for this feedback. While we appreciate the recognition of our work\\u2019s relation to previous studies, we would like to elaborate on what we wanted to show from experimental comparisons. We also would like to clarify several key distinctions:\\n\\n- We have discussed the key differences between our work and ProtMamba [[Sgarbossa et al.](https://www.biorxiv.org/content/10.1101/2024.05.24.595730v1)] in lines 152-153 and 158-161 and Table 1. 
To summarize, **ProtMamba trained on concatenated homologous protein sequences with autoregressive causal language modeling and an infilling objective focusing on protein sequence generation**. However, our LC-PLM focuses on learning a foundation-level long context protein language model (pLM) that can provide universal amino acid level protein representations for extremely long protein sequences; protein complexes, multimers, and heterodimers with encoded protein interaction context information. We also note that **LC-PLM is a much better foundation pLM** than the previous open-sourced SOTA foundation pLMs (i.e. ESM-2 [[Lin et al.](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1.full.pdf)] and CARP [[Yang et al.](https://www.biorxiv.org/content/10.1101/2022.05.19.492714v2.full.pdf)]). We additionally indicate that, unlike ProtMamba, which uses homologous sequences as context, the proposed LC-PLM-G showed a novel way to encode protein interaction graph information that contextualizes functionally related proteins rather than semantically similar ones (i.e. proteins with similar sequences) .\\n\\n- We thank the reviewer for suggesting a direct comparison with ProtMamba. We want to highlight again that our work aims to build a better protein foundation model instead of a task-specific protein model. We provide new experimental results comparing LC-PLM to ProtMamba. In these new experiments, we evaluate ProtMamba on the downstream tasks we used in our paper. **The performance of ProtMamba is much lower than LC-PLM (even CARP and ESM-2 as shown in general response) across all tasks**. This suggests that ProtMamba pretrained with concatenated homologous protein sequences potentially leads to degraded representations of individual proteins. After all, ProtMamba is trained to use homologous sequences as context for protein generation tasks (e.g., infilling and fitness prediction), rather than producing useful representations for proteins. 
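For intuition, the graph-context serialization behind LC-PLM-G's second-stage training (random walks over the PPI graph, concatenated into long samples) can be sketched as follows; the node names, toy sequences, and `<sep>` separator are placeholders we invented, not the actual tokenization:

```python
import random

def random_walk(adj, start, length, rng):
    """Uniform random walk over an adjacency dict {node: [neighbors]}."""
    walk = [start]
    while len(walk) < length and adj.get(walk[-1]):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def walk_to_sample(walk, sequences, sep="<sep>"):
    """Serialize the walked proteins into one long-context training string."""
    return sep.join(sequences[node] for node in walk)

# toy PPI graph with placeholder amino-acid strings (illustrative, not real data)
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
seqs = {"A": "MKV", "B": "GLL", "C": "TPA"}
sample = walk_to_sample(random_walk(adj, "A", 3, random.Random(0)), seqs)
```

Each sample thus places a protein next to its interaction partners rather than next to homologs, which is the key contrast with ProtMamba's concatenated-MSA context.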
Also it is worth noting that **ProtMamba cannot extrapolate to sequence > 2048** since they use positional encodings with fixed length in training. We summarized the results in the tables below. We added these results to our manuscript. Also as a side note, ProtMamba (https://openreview.net/forum?id=BMfHO2lXGe) is a concurrent submission to ICLR 2025 so other submissions can be excused for not comparing against them according to ICLR\\u2019s policy (https://iclr.cc/Conferences/2025/FAQ).\"}", "{\"title\": \"Discussion [4/n]\", \"comment\": \"> 5. Why don't people just use multi-modal models like ESM-3? Why do we still work on pure pLM?\", \"we_elaborate_the_reasons_as_follow\": [\"We kindly note that **ESM-3 is also a pLM** but with training data tokenized from multiple sources including protein sequences, structures, and functions. So if we can build up a better pLM, we can also make a better (\\\"multi-modal\\\") version of LC-PLM in the future with the same training data, which can outperform ESM-3.\", \"Also, note that ESM-3 still suffers the same intrinsic issue that it has weak long-context capability of modeling protein complexes, heterodimers, interaction contexts, and etc. Therefore, if our proposed LC-PLM can alleviate this weakness, we will have a better model to adapt to these mentioned scenarios/tasks.\", \"Developing a better base model is orthogonal to training a multi-modal model. For example, the researchers are still putting efforts on developing new language models that have other features like highly compute-efficient, capture long-context, etc. Why don't they just use GPT-4o and stop researching on new LMs? Our work follows the same motivation and insights.\", \"Lastly, we kindly ask the reviewer to **carefully read the related literature in this domain and look into our response and manuscript** before making arguments. 
Thank you.\"]}", "{\"title\": \"Kindly reminder of the end of rebuttal\", \"comment\": \"Dear Reviewer Z2Tu,\\n\\nAs the discussion period is coming to an end tomorrow, we kindly ask you to review our response to your comments and let us know if you have any further queries. Alternatively, if you think we addressed your concerns properly and could raise the rating of the paper, we would be extremely grateful. We eagerly anticipate your response and are committed to addressing any remaining concerns before the discussion period concludes.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"The paper presents LC-PLM, a protein language model based on a bidirectional selective structured state-space models with shared projection layers (BiMamba-S) for learning protein representations. Additionally, it introduces a graph-based variant, LC-PLM-G, incorporating protein-protein interaction (PPI) graphs for training and show learning paradigm with extended context beyond individual proteins. The authors claim that (1) LC-PLM demonstrates better length extrapolation capabilities and outperforms Transformer-based models like ESM-2 on various essential protein-related tasks; (2) LC-PLM-G shows promise in protein structure and function prediction tasks by integrating biological interaction contexts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a range of experimental evaluations across downstream tasks, showing better performance compared to one standard Transformer-based protein language model ESM-2. The evaluation tasks are diverse and comprehensive.\", \"The introduction of LC-PLM-G, incorporating PPI graphs, is a novel direction for protein language models. 
Using relational information between proteins has potential for realistic applications where protein interactions play a crucial role.\"], \"weaknesses\": [\"The paper does not adequately justify the selective structured state-space (S4) models over existing Transformers-based PLMs (eg. ESM-2), which are widely adopted for protein sequence modeling. Protein sequence understanding does not usually face the same latency/bandwidth constraints as real-time LLM deployments such as ChatGPT, where SSMs could possibly bring more practical value with linear inference time. Without clear advantages or innovations in the application of SSMs, this choice lacks strong motivation to stand convincing.\", \"The paper assumes that longer context modeling is inherently beneficial for protein sequences. However, the biological need for extremely long contexts remains unclear in many protein-related applications, especially since functional motifs often reside within local regions of a protein rather than requiring entire sequence context. Without a clear biological or empirical rationale, the assumption that long contexts are \\u201cessential\\u201d for protein understanding is questionable to me.\", \"The authors highlight LC-PLM\\u2019s computational efficiency over Transformers due to the quadratic complexity of Transformers. However, given that training/inference-efficient Transformers exist (flash attention for example), the emphasis on efficiency without practical demand diminishes the paper\\u2019s significance.\", \"Across the evaluation tasks (yes i agree they are many), only ESM-2 is there for comparison while for many of the task ESM-2 (alone) is not the real state-of-the-art (SOTA). Note that there are many interesting works built based on the ESM-2 embeddings and give very promising SOTA results for function prediction or mutation effect modeling. 
I wonder how the LC-PLM embeddings accommodate these existing models and perform better than ESM-2.\"], \"questions\": [\"In RQ1, how do the authors hold out the test 250k sequences? Do the authors re-train the ESM2 for plotting the Figure 4? Moreover, in Figure 4/5, what specifically is the \u201cEvaluation Loss\u201d for the label of the y-axis (vertical direction)?\", \"When reporting folding evaluation, how are the structures obtained from ESM-2? My concern is, for evaluation fairness, one should align the folding trunk while on the other hand, the distribution over the residue-level embeddings from (1) LC-PLM and (2) ESM-2 can be different in the vector space. Please elaborate or point out some relevant context to address this question.\", \"A related question from above, I understand the folding (RQ4) task for the authors is just a \u201cproof-of-concept\u201d such that the authors adopt downsampling for the openfold training data. However, 1.5% of protein single chains can have large variance for the model to show enough performance bias. Could the authors test the robustness of this downsampling strategy and show error bars on it?\", \"Question regarding the results incorporating the protein-protein interaction PPI graph, namely LC-PLM-G. Specifically, in table 3 of the two tasks from TAPE benchmark, the LC-PLM-G/ESM-2-G shows very similar results compared to their vanilla versions (LC-PLM/ESM-2). The performance improvement margin seems to be within the reported std range and I do not see significant improvement from doing this; in table 4/ProteinGym benchmark, for ESM-2-G, incorporating the PPI graph drastically hurts the performance on top of ESM-2 while the performance gap between LC-PLM-G/LC-PLM is also hard to tell. These results can weaken the \u201cnecessity\u201d claim of the main motivation of this paper: modeling based on multiple sequence input (PPI in this case).
Could the author further justify the benefit for inference with additional PPI graph?\", \"Could the authors mentioned some concrete modeling examples with biological significance that we have to turn to Mamba because the attention/Transformer sucks or even fails to handle the problem at all? Can the authors provide further evidence that the extended context window length improves the \\u201crelated\\u201d downstream protein tasks beyond the results presented? At present, it is still unclear to me why should I buy the idea of using S4/mamba instead of attention?\"], \"misc\": [\"In figure 6, why using \\u201cvalue x 2^1\\u201d as the ticks? That looks a bit weird\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Z2Tu [2/n]\", \"comment\": \">**2. Evaluation of pLMs on the other 3 TAPE tasks: Contact Prediction, Fluorescence and Stability.**\\n\\nWe appreciate this suggestion and performed evaluations for all the models considered in this manuscript (LC-PLM, LC-PLM-G, ESM-2, ESM-2-G, CARP, ProtMamba). The results are shown in Table A and added to the revised version of our paper. It is worth noting that we adopt a different setting for the Contact Map prediction task for a fair comparison of attention-free models including CARP, LC-PLM, and ProtMamba. Instead of using the attention maps from Transformer-based pLMs to predict the contact maps [[Rao et al. 2020](https://www.biorxiv.org/content/10.1101/2020.12.15.422761v1)], we followed [[Zhang et al. 
(2024)](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein LMs as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of the ProteinNet dataset [[AlQuraishi 2019](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)]. In the Contact Map and Stability prediction tasks, we noted a similar trend where LC-PLM outperforms ESM-2 with the same number of pretraining tokens, and that 2nd stage graph pretraining slightly improves LC-PLM while hurting the performance of ESM-2, highlighting the difficulty of adapting ESM-2 to handle longer contexts. For the Fluorescence task, we noted that all models saturated at a Spearman correlation coefficient of around 0.69. This observation was also made by others [[Rao et al. 2019](https://www.biorxiv.org/content/10.1101/676825v1); [McDermott et al. 2023](https://www.nature.com/articles/s42256-023-00647-z); [Schmirler et al. 
2024](https://www.nature.com/articles/s41467-024-51844-2)].\\n\\n| Models | PPI Graph | Jacobian Contact Map | Stability | Fluorescence |\\n|-------------------------|------------------|-----------------------|--------------------|--------------------|\\n| ESM-2-650M (100B) | None | 44.05 | 0.763 \\u00b1 0.008 | 0.695 \\u00b1 0.002 |\\n| ESM-2-G-650M (100B) | ogbn-proteins | 32.35 | 0.750 \\u00b1 0.016 | 0.694 \\u00b1 0.002 |\\n| ESM-2-G-650M (100B) | ogbl-ppa | 26.66 | 0.753 \\u00b1 0.009 | 0.693 \\u00b1 0.001 |\\n| LC-PLM-790M (100B) | None | 47.10 | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.003** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n| ProtMamba | None | 10.96 | 0.726 \\u00b1 0.012 | 0.688 \\u00b1 0.005 |\\n| CARP-640M-public | None | 25.83 | 0.720 \\u00b1 0.010 | 0.680 \\u00b1 0.002 |\\n| ESM-2-650M-public | None | 66.85 | 0.804 \\u00b1 0.006 | 0.688 \\u00b1 0.001 |\"}", "{\"summary\": \"The authors propose a method for masked-modeling of protein sequences with large context sizes.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Generally important topic of modeling proteins\", \"Diverse, extensive and involved experiments done\", \"Good presentation of the material\"], \"weaknesses\": \"- Originality: This method already exists as ProtMamba [1]. ProtMamba also uses Mamba for protein modeling and introduces long context tasks. The authors should propose a new method to model protein sequences and long-contexts.\\n\\n- Significance: \\na) The significance of this work is also diminished by many false and tenuous claims. E.g. \\\"Such protein LMs cannot extrapolate to longer proteins [..]\\\": this is not true, and cannot and has not been shown theoretically nor empirically. 
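To make the evaluation metric above concrete, here is a minimal numpy sketch of top-k long-range contact precision, the quantity behind the precision@2/L numbers in the table. The function names, the `min_sep=6` sequence-separation cutoff, and the reading of "2/L" as the top 2·L pairs are illustrative assumptions, not the exact protocol of the cited benchmark.

```python
import numpy as np

def precision_at_k(scores, contacts, k, min_sep=6):
    """Precision of the top-k scored long-range residue pairs against true contacts.

    scores:   (L, L) matrix of predicted contact scores
    contacts: (L, L) binary matrix of true contacts
    min_sep:  ignore pairs closer than this along the sequence (assumed cutoff)
    """
    iu, ju = np.triu_indices(scores.shape[0], k=min_sep)  # pairs separated by >= min_sep
    top = np.argsort(scores[iu, ju])[::-1][:k]            # indices of the k highest scores
    return float(contacts[iu[top], ju[top]].mean())

def precision_at_2L(scores, contacts, min_sep=6):
    """One plausible reading of 'precision@2/L': precision over the top 2*L pairs."""
    return precision_at_k(scores, contacts, 2 * scores.shape[0], min_sep)
```

In practice `scores` would come from the categorical Jacobian of the language model; here it is just an input matrix, so the metric can be sanity-checked on synthetic data.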
Transformer-based pLMs can extrapolate to longer contexts and can also handle relatively long contexts (e.g. 16k context size). Also \\\"they fail to account for the underlying biological mechanisms [...]\\\" is just a tenuous claim without any evidence. The claim that \\\"our work [...] learns universal AA token level representation\\\" also lacks any evidence and any definition of what \\\"universal\\\" means. The authors should remove all tenuous and false claims (not limited to the ones I mentioned) from the manuscript.\\n\\nb) The approach with graph-contextual modeling is ad-hoc. It is unclear why proteins from a PPI graph should be relevant contexts for a particular protein at hand. The homology criterion used by [1] is much better justified and the default approach to assemble long contexts.\\n\\n- Technical errors: \\nComparisons with ProtMamba and Transformer-based architectures are missing. It seems that the authors have only trained their own architecture in this work and not a single model based on another architecture. The authors should at least compare with ProtMamba, and with 1-2 Transformer-based architectures. Therefore, it also remains unclear from where the alleged performance gains arise: the increased context, the graph-style pre-training, the architecture, or any other component.\", \"other_flaws\": \"\", \"table_1_and_table_3\": \"one digit too many is displayed.\", \"references\": \"[1] Sgarbossa, D., Malbranke, C., & Bitbol, A. F. (2024). ProtMamba: a homology-aware but alignment-free protein state space model. 
bioRxiv, 2024-05.\", \"questions\": [\"Can you clearly state which models have been trained in this work and for which models you just took pretrained versions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers raised major concerns on this paper and most of the issues are not resolved during discussions.\", \"additional_comments_on_reviewer_discussion\": \"There have been extensive discussions, but most of the reviewers are not convinced.\"}", "{\"title\": \"General response to all reviewers [1/2]\", \"comment\": \"We sincerely thank all the reviewers for their constructive feedback. Here, we report a brief summary of common questions across multiple reviewers and results from our new experiments to address them.\\n\\n> **1. Motivation of long-context pLMs**\", \"the_biological_motivations_for_long_context_modeling_of_proteins_can_be_summarized_to_three_aspects\": [\"Functional: many proteins function as part of multi-protein complexes (e.g. transcription factors) and physically or functionally interact with other proteins and molecules. The interaction information is often captured in protein-protein interaction graphs. Knowing the interacting partners of an individual protein is helpful in predicting the protein\\u2019s properties.\", \"Structural: protein structure depends on global fold, which can involve residues and interactions across long distances and across multiple protein sequences. Modeling multi-protein systems captures distant dependencies critical for stability and function. Folding of multi-meric protein complexes relies on models capable of handling long contexts.\", \"Evolutionary: Proteins in the same pathway or family exhibit co-evolutionary patterns. Multiple sequence alignment (MSA) of homologous protein sequences is a common approach to increase the contexts for studying individual proteins. 
As other reviewers noted, ProtMamba [[Sgarbossa et al.](https://www.biorxiv.org/content/10.1101/2024.05.24.595730v1)] is inspired by leveraging MSA as an individual protein\\u2019s context.\", \"> **2. Comparison with ProtMamba**\", \"We performed additional evaluation experiments to comprehensively compare the performances of our models with ProtMamba. The new results are summarized in Table A.\"], \"table_a\": \"Evaluation of pLMs on downstream tasks. We report the top-1 accuracy for the Remote Homology fold-level test set; accuracy for the 3-class secondary structure prediction on the CB513 test set; Spearman\\u2019s correlation coefficients for the test sets for the Stability and Fluorescence prediction tasks. For the Jacobian contact map prediction task, we adopted the methods from [Zhang et al. (2024)](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1) to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of ProteinNet dataset [[AlQuraishi 2019](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)]. 
We report the TM scores for the LMFold structure prediction task.\\n\\n| Tasks/Models | ESM-2-650M (100B) | LC-PLM-790M (100B) | ProtMamba |\\n|-----------------------------|---------------------|-----------------------|------------------------|\\n| **TAPE** | | | |\\n| Jacobian Contact Map | 44.05 | **47.1** | 10.96 |\\n| Remote Homology | 26.57 \\u00b1 0.49 | **35.14 \\u00b1 1.69** | 17.82 \\u00b1 1.85 |\\n| Secondary Structure | 79.86 \\u00b1 0.09 | **85.07 \\u00b1 0.03** | 68.43 \\u00b1 0.06 |\\n| Stability | 0.763 \\u00b1 0.008 | **0.794 \\u00b1 0.003** | 0.726 \\u00b1 0.012 |\\n| Fluorescence | **0.695 \\u00b1 0.002** | 0.692 \\u00b1 0.002 | 0.688 \\u00b1 0.005 |\\n| **LMFold structure prediction** | | | |\\n| CASP15-multimers | 0.4228 \\u00b1 0.0065 | **0.5109 \\u00b1 0.0070** | N/A * |\\n| CASP14 | 0.3531 \\u00b1 0.0076 | **0.4154 \\u00b1 0.0080** | 0.3288 \\u00b1 0.0091 |\\n| Benchmark2 | 0.4859 \\u00b1 0.0119 | **0.6290 \\u00b1 0.0071** | 0.4515 \\u00b1 0.0062 |\\n\\n\\nWe found that our LC-PLM outperforms ProtMamba across all but one task by large margins. These results suggest that ProtMamba pretrained with concatenated homologous protein sequences potentially leads to degraded representations of individual protein sequences. Also it is worth noting that **ProtMamba cannot extrapolate to sequence > 2048 in CASP15-multimer benchmark** since they use positional encodings with fixed length in training. We added these results to our manuscript.\"}", "{\"title\": \"Response to Reviewer sY9s [4/n]\", \"comment\": \">**2.Why don't we simply train a GNN?**\\n\\n- We want to note that, by training with graph context, we are not aiming to only perform graph tasks (e.g. node prediction, link prediction, etc.), but also want to show that **a good representation with encoded protein interaction context information helps many other downstream tasks (shown in Tables 3, 4, and 12) like remote homology prediction**. 
We also trained GNNs on other graph-related tasks like ogbn-proteins and ogbl-ppa and conducted ablation studies with and without our learned protein embeddings, as shown in Tables 16 and 17. We show that with better protein representations and improved graph context-aware representations, the performance can be further boosted.\\n\\n>**3. Wrong direction in the figure.**\\n\\n- Thanks for the feedback and for pointing this out. We have fixed it in our paper.\"}", "{\"title\": \"Response to Reviewer jUT5 [4/n]\", \"comment\": \"- *Also \\\"they fail to account for the underlying biological mechanisms [...]\\\" is just a tenuous claim without any evidence.*\\n\\nWe provide extensive evidence throughout the paper in Tables 2, 3, and 4 to show that the transformer-based SOTA pLM (i.e. ESM-2) fails to perform well when the task requires knowledge of biomolecular interactions and dynamics. For example, in Table 2, we show that ESM-2 performs much worse than LC-PLM on all folding benchmarks, especially on protein complexes, which need protein interaction information to better infer the structures; in Table 3, we show that ESM-2 also performs much worse on TAPE tasks, i.e. remote homology prediction and secondary structure prediction; in Table 4, we show that ESM-2 is also much worse on the ProteinGym benchmark. We think there is sufficient evidence to demonstrate the suboptimality of the transformer-based SOTA pLM (ESM-2) in capturing biomolecular interactions and dynamics and accounting for such underlying biological mechanisms.\\n\\n- *The claim that \\\"our work [...] 
learns universal AA token level representation\\\" also lacks any evidence and any definition of what \\\"universal\\\" means.*\\n\\nWe use the term \\u201cuniversal\\u201d since (1) this term has been widely used in the protein representation learning literature [[Alley et al.](https://pubmed.ncbi.nlm.nih.gov/31636460/), [Detlefsen et al.](https://www.nature.com/articles/s41467-022-29443-w)], where researchers describe pLMs as *\\u201c[learning universal, cross-family representations of protein space](https://www.nature.com/articles/s41467-022-29443-w)\\u201d*; and (2) we pretrain our models on the **Universal** Protein Reference Clusters (UniRef) dataset which contains universal protein sequence resources. Given such learned high-quality protein embeddings, we can achieve decent performance across a variety of downstream tasks, which demonstrates the universality of such protein representations.\\n\\n>**3. Why don\\u2019t we use homology criteria like in ProtMamba?**\\n\\nThank you for this question. We don\\u2019t consider a homology criterion because the context contained in homology representations, such as the multiple sequence alignments (MSAs) used for pretraining ProtMamba, is already contained in the sequences of individual proteins. After all, MSAs can be considered as performing clustering on a set of protein sequences. As such, we think homology does not bring additional information beyond protein sequences. On the other hand, we consider biological graphs such as PPI graphs, which contain functional interactions between protein sequences derived from additional sources such as annotations and wet lab biological experiments.\\n\\n>**4. 
Train foundation pLMs with other Transformer architectures for comparison.**\\n\\nIn our scaling law, length extrapolation, structure prediction, TAPE benchmark, and ProteinGym experiments, we pretrained our own Transformers/ESM-2 on the same dataset with the same number of tokens across different sizes from scratch as baseline comparisons. We clearly indicated where we retrained models / used publicly pretrained models in our manuscript, e.g. lines 338-341, lines 366-368, lines 433-434, the footnote in Table 2, etc.\\n\\nWe want to note that our main goal is **building up a foundation-level long-context protein language model (pLM)** that can provide universal amino acid level protein representations for extremely long protein sequences; protein complexes, multimers, and heterodimers with encoded protein interaction context information, rather than selecting or tailoring a specific architecture, just as ESM-2 did (there exist many other decent linear RNN/Transformer architectures in the community, but trying out all of them is beyond the scope of our paper). We would like to **demonstrate that LC-PLM is a much better foundation pLM than the previous open-sourced SOTA foundation pLMs (i.e. ESM-2 and CARP)**. We compare our LC-PLM to them in Table 3. We provide more comparisons in the Table below.\"}
We eagerly anticipate your response and are committed to addressing any remaining concerns before the discussion period concludes.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer Z2Tu\", \"comment\": \"Thank you for clarifying your perspective on confidence as a meta-value. That makes total sense to us.\\n\\nAgain, we greatly appreciate the time and effort you\\u2019ve invested in improving our work and your support of our submission. If any remaining aspects of the paper could benefit from further clarification, we would be happy to address them.\"}", "{\"title\": \"Discussion [3/n]\", \"comment\": [\"Comparing ESM-2 (Transformer-based) and LC-PLM (Mamba-based), we have more discussion and evidence to show that the long-context capability is essential for many downstream tasks.\", \"LC-PLM has favorable adaptability to long-context tuning: We performed three more downstream protein tasks to evaluate ESM2 and LC-PLM models. In our existing and new experiments (compiled in Table A), after performing the 2nd stage graph training, the performance of ESM2 (Transformers) degrades on many downstream tasks including protein fitness prediction in ProteinGym (Table 4, Spearman\\u2019s rho drop from 0.295 to 0.109), Contact map prediction (New table below, precision drops from 44.1 to 26.7), and TAPE stability prediction (New table below, Spearman\\u2019s rho drop from 0.763 to 0.750). On the other hand, our LC-PLM models maintain or slightly improve their performances on these tasks after the 2nd stage graph training. These results suggest it is difficult to tune Transformer models to adapt to longer contexts.\", \"Mamba-based models enjoy favorable inference efficiency in terms of both time and space complexity. This will satisfy the practical demands for in-silico protein design, where one needs to screen $10^6$ sequences using pLM based methods. The constant time complexity of Mamba/SSMs could be an advantage in accelerating this phase. 
There is also a GPU memory constraint when performing inference with the Transformer/ESM2 model on long protein sequences, which users of the ESM2 model have been facing [[issue1](https://github.com/facebookresearch/esm/issues/21), [issue2](https://github.com/facebookresearch/esm/issues/49)].\"], \"table_a\": \"Evaluation of pLMs before and after 2nd stage graph context training on downstream tasks. For the Jacobian contact map prediction task, we adopted the methods from [[Zhang et al.](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of the ProteinNet dataset [[AlQuraishi](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)]. We report the Spearman\\u2019s correlation coefficients for the test sets for the TAPE Stability prediction tasks and ProteinGym DMS substitutions benchmarks.\\n\\n| Models | PPI Graph | Jacobian Contact Map | TAPE Stability | ProteinGym DMS substitutions |\\n|-----------------------|-----------------|-----------------------|--------------------|-----------------------------|\\n| ESM-2-650M (100B) | None | 44.05 | 0.763 \\u00b1 0.008 | 0.295 \\u00b1 0.013 |\\n| ESM-2-G-650M (100B) | ogbn-proteins | 32.35 | 0.750 \\u00b1 0.016 | 0.109 \\u00b1 0.013 |\\n| ESM-2-G-650M (100B) | ogbl-ppa | 26.66 | 0.753 \\u00b1 0.009 | 0.131 \\u00b1 0.014 |\\n| LC-PLM-790M (100B) | None | 47.1 | 0.794 \\u00b1 0.003 | 0.378 \\u00b1 0.008 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **0.801 \\u00b1 0.001** | **0.380 \\u00b1 0.008** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | **47.23** | **0.801 \\u00b1 0.001** | **0.380 \\u00b1 0.008** |\"}
Criterions for Table 1 should be better specified (e.g., when is a method considered to be universal?)**\\n\\nWe define universality as learning representations that reflect general properties of all proteins, rather than a specific protein family or properties such as post-translational modifications. pLMs satisfying universality can be used as foundation models to develop specialized methods such as PTM prediction and structure prediction. We added these clarifications to our manuscript. \\n\\nWe use the term \\u201cuniversal\\u201d since (1) this term has been widely used in the protein representation learning literature [[Alley et al.](https://pubmed.ncbi.nlm.nih.gov/31636460/), [Detlefsen et al.](https://www.nature.com/articles/s41467-022-29443-w)], where researchers describe pLMs as *\\u201c[learning universal, cross-family representations of protein space](https://www.nature.com/articles/s41467-022-29443-w)\\u201d*; and (2) we pretrain our models on the **Universal** Protein Reference Clusters (UniRef) dataset which contains universal protein sequence resources. Given such learned high-quality protein embeddings, we can achieve decent performance across a variety of downstream tasks, which demonstrates the universality of such protein representations.\\n\\n>**4. line 101: here also \\\"Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen\\\" should be cited.**\\n\\nThanks for pointing this out; we added this citation to acknowledge their contribution to RNNs. \\n\\n>**5. Which context size do you use for ESM-2? Could it have been increased for the experiments you carried out (i.e., did ESM2 and LC-PLM use approximately the same memory?)?**\\n\\nWe use 1024 as the context size for the ESM-2 models we trained, which is the same context size used in the public ESM2 models. The context size remains constant, as the size of the positional embedding matrix can\\u2019t be changed. \\n\\n>**6. What hyperparameters did you use for ESM-2? 
How was hyperparameter selection done for ESM-2 and LC-PLM(-G)?**\\n\\nWe adopt the hyperparameters from the official ESM-2 paper [Lin et al. 2023] with some modifications: global batch size=0.5M tokens; peak learning rate = 2e-4, 2000-step learning rate warm-up, followed by a cosine decay schedule; weight decay=0.01; Adam beta1=0.9, beta2=0.98, eps=1e-8. We added these details in our Appendix E.\\n\\n>**7. RQ3: In contrast to some other experiments UniRef50 is used for training. Evaluation is at UniRef90. Why is this the case?**\\n\\nThanks for this question! We and others [[Rives et al. 2019](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.pnas.org/doi/10.1073/pnas.2016239118&ved=2ahUKEwiX1Mnu1PCJAxVMEGIAHR7mGQYQFnoECA4QAQ&usg=AOvVaw0GxY6R8v3ee7mp_QSywY2N)] found that pretraining with UniRef50 leads to lower perplexity with the same training budget due to the lower level of sequence similarity in UniRef50. However, the UniRef90 dataset has sizable numbers of proteins across all length bins, which makes it suitable for the length extrapolation evaluation experiments. \\n\\n>**8. line 462-464: \\\"As shown in Figure 7, the embeddings from LC-PLM-G captures the graph topology much better than LC-PLM, which aligns with the community detection results. What is the criterion to see that LC-PLM-G captures the graph topology much better?**\\n\\nWe elaborate more on the criterion here. We first run community detection on the graph and label the nodes with their corresponding community labels. Since community detection helps us find local clusters in the graph, this labeling method provides enough information about the topological structure present in the graph. We then apply t-SNE dimensionality reduction to the learned protein embeddings and visualize the 2-D embeddings in the plot. If nearby 2-D embeddings share similar community labels, this demonstrates that our proposed graph context training captures graph relational information well.\\n\\n>**9. 
line 439-440: \\\"This also suggests that, even for average-length protein sequences, long-range dependencies would be useful information and an important feature for protein structure prediction. What is the exact indication to come to this conclusion?\\\"**\\n\\nThis conclusion was made based on the performance improvement of LC-PLM on the CASP14 dataset (Table 2). CASP14 contains 37 mostly average-length protein sequences (mean=318.6; median=197).\\n \\n>**10. line 533: \\\"LC-PLM achieved 7% to 34% better performance on various downstream tasks.\\\" Which task do you exactly mean with 7% and which with 34%?**\\n\\nWe found LC-PLM achieved 7% and 34% improvement over ESM-2 on TAPE Secondary Structure (85.07% vs 79.85% in accuracy), and Remote Homology (35.14% vs 26.57% in top-1 accuracy), respectively.\\n\\nLastly, we would like to sincerely ask you whether you are willing to increase your confidence score if you think we addressed your comments properly. Thank you so much for your support again!\"}", "{\"summary\": \"The authors present a protein model which is based on the Mamba backbone which runs in bidirectional mode with parameter sharing. The authors compare their method to ESM-2 for the pre-training data (UniRef90) and for downstream structure prediction tasks. For protein protein interaction tasks, the authors present a graph-based version of their model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The results indicate that the proposed model outperforms the ESM-2 baseline on both pretraining MLM objective and the downstream tasks.\", \"The authors perform experiments for a wide range of model sizes ranging from 130M parameters to 1400M parameters.\"], \"weaknesses\": [\"**Little novelty**:\", \"Protein modeling with Mamba based backbone architecture is done here [1]. 
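The visual criterion described in point 8 of the response above can be quantified with a small sketch. This is a simplified stand-in, not the paper's pipeline: PCA via SVD replaces t-SNE, the community labels are assumed to be precomputed (in the paper they come from community detection on the PPI graph), and a nearest-neighbor label-agreement score stands in for visual inspection of the 2-D plot.

```python
import numpy as np

def project_2d(emb):
    """Project embeddings to 2-D with PCA (via SVD); a lightweight stand-in for t-SNE."""
    X = emb - emb.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T

def neighbor_label_agreement(points, labels):
    """Fraction of points whose nearest neighbor shares their community label."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = d.argmin(axis=1)
    return float((labels[nn] == labels).mean())

# toy example: two well-separated "communities" in a 16-D embedding space
rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(0, 0.1, (20, 16)),
                      rng.normal(5, 0.1, (20, 16))])
labels = np.array([0] * 20 + [1] * 20)
pts = project_2d(emb)
print(neighbor_label_agreement(pts, labels))  # close to 1.0 for well-clustered embeddings
```

A high agreement score corresponds to the qualitative claim that nearby 2-D embeddings share community labels.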
Notably, in a sense [1] also allows for bidirectional information sharing by their fill-in-the-middle objective.\", \"Bidirectional Mamba blocks with parameter sharing were already done here [2].\", \"**Lack of comparison**:\", \"If the authors think their work is substantially different from [1], they have to compare to [1] and discuss benefits/differences.\", \"For all experiments, e.g. the ProteinGym experiment (Table 4), SOTA methods and their performances should be reported along with ESM-2 and LC-PLM-*.\", \"Figure 2: Mistake wrt the orientation of the third linear projection block.\", \"[1] ProtMamba: a homology-aware but alignment-free protein state space model\", \"[2] Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling\"], \"questions\": [\"Wrt the PPI section: It seems a bit counterintuitive to force the graph-structure-like information into tokenized text only to be able to naively apply a language model to it. Why shouldn't it be possible to simply train a GNN on the given data?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Z2Tu [1/n]\", \"comment\": \"We sincerely thank the reviewer for the favorable rating and constructive suggestions! To address the weaknesses and questions:\\n\\n>**1. Performance comparison with ProtMamba.**\\n\\nWe evaluated the performance of the ProtMamba model against ours across seven tasks and found that ProtMamba significantly underperforms our models on the five tasks listed here (structure prediction, Table 2; Contact Map, Remote Homology, Secondary Structure, and Stability, Table 3). Our results suggest that ProtMamba does not produce good representations for individual protein sequences. Its training regime favors protein generation and contextual fitness prediction, which may degrade the representation quality of individual protein sequences in the meantime. \\n\\nTable A. 
Protein structure prediction with LMFold. Structure prediction performance (TM score) are reported on different hold out datasets.\\n| Models | CASP15-multimers | CASP14 | Benchmark2 |\\n|-----------------------|-----------------------------------------------|--------------------------------------|------------------------------------|\\n| LC-PLM-790M (100B) | **0.5109 \\u00b1 0.0070** | **0.4154 \\u00b1 0.0080** | **0.6290 \\u00b1 0.0071** |\\n| ProtMamba-public | N/A (ProtMamba cannot run on protein sequences with > 2048 length) | 0.3288 \\u00b1 0.0091 | 0.4515 \\u00b1 0.0062 |\\n\\nTable B. Evaluation on TAPE tasks in supervised fine-tuning setting. We report the top-1 accuracy for the Remote Homology fold-level test set; accuracy for the 3-class secondary structure prediction on the CB513 test set; Spearman\\u2019s correlation coefficients for the test sets for the Stability and Fluorescence prediction tasks. For the Jacobian contact map prediction task, we adopted the methods from [[Zhang et al.](https://www.biorxiv.org/content/10.1101/2024.01.30.577970v1)] to use categorical Jacobian matrices computed from protein language models as the zero-shot prediction for protein contact maps and report the precision@2/L (L is the length of a protein sequence) on the validation set of ProteinNet dataset [[AlQuraishi](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0)].\\n| Models | PPI Graph | Jacobian Contact Map | Remote Homology | Secondary Structure | Stability | Fluorescence |\\n|-----------------------|-----------------|-----------------------|---------------------|---------------------|-----------------|-----------------|\\n| LC-PLM-790M (100B) | None | 47.1 | 35.14 \\u00b1 1.69 | **85.07 \\u00b1 0.03** | 0.794 \\u00b1 0.003 | 0.692 \\u00b1 0.002 |\\n| LC-PLM-G-790M (100B) | ogbn-proteins | 47.15 | **35.74 \\u00b1 0.93** | 85.02 \\u00b1 0.11 | **0.801 \\u00b1 0.001** | **0.709 \\u00b1 0.033** |\\n| LC-PLM-G-790M (100B) | ogbl-ppa | 
**47.23** | 35.60 \\u00b1 1.45 | 85.01 \\u00b1 0.03 | **0.801 \\u00b1 0.001** | 0.693 \\u00b1 0.002 |\\n| ProtMamba-public | None | 10.96 | 17.82 \\u00b1 1.85 | 68.43 \\u00b1 0.06 | 0.726 \\u00b1 0.012 | 0.688 \\u00b1 0.005 |\"}", "{\"title\": \"Response to Reviewer 8QMc [1/n]\", \"comment\": [\"We thank the reviewer for their valuable feedback on our paper. We performed multiple novel ablations and comparisons that we report in the main comment of the rebuttal. We also address the reviewer\\u2019s specific comments and concerns below:\", \">**1.Weak motivation in using SSMs: no latency constraints for pLMs.**\", \"We want to emphasize and restate our main motivation for using BiMamba-S as our architectural design choice in our work \\u2013 our LC-PLM focuses on **learning a long-context foundation pLM** that can provide universal amino acid level protein representations for extremely long protein sequences; protein complexes, multimers, and heterodimers with encoded protein interaction context information. SSMs grant the model (1) long-context capability, (2) length extrapolation capability, and (3) high-resolution data modeling capability to realize our goal. In contrast, Transformers have to be retrained with adjusted positional encodings to extrapolate over length [[Liu et al.](https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://arxiv.org/abs/2312.17044&ved=2ahUKEwi-iMyxxPCJAxW4kokEHQNYHhcQFnoECCEQAQ&usg=AOvVaw0fldeIoSBJQd5fNgnKjOAN)] or can not perform well on high-resolution data [[Wang et al.](https://arxiv.org/abs/2401.13660), [Schiff et al.](https://arxiv.org/abs/2403.03234)].\", \"Along with such motivations, we empirically demonstrate that LC-PLM has improved length extrapolation capabilities, favorable scaling laws, and achieved a 7% to 34% improvement on downstream tasks (e.g. 
protein structure prediction (CASP15-multimers, CASP14, Benchmark2), tasks in TAPE and ProteinGym) compared to ESM-2 [[Lin et al.](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1.full.pdf)] and CARP [[Yang et al.](https://www.biorxiv.org/content/10.1101/2022.05.19.492714v2.full.pdf)], especially for longer proteins and protein complexes; (2) to encode biological interaction information, we propose a novel second-stage training based on random walks over graphs to extend the long-context capabilities of LC-PLM to leverage the protein interaction graph context; (3) we demonstrate its effectiveness in capturing graph contextual information on remote homology detection (TAPE) in Table 3, protein function prediction (ogbn-proteins) in Tables 14 and 15, and PPI link prediction (ogbl-ppa) in Tables 16 and 17.\", \"In addition to our main motivations above, we argue that inference efficiency is important in computational protein design/drug discovery applications: one usually generates up to $10^6$ protein sequences as candidates [[Adolf-Bryfogle et al.](https://pubmed.ncbi.nlm.nih.gov/29702641/)], and fine-tuned pLMs can be used for scoring, ranking, and filtering the designed protein sequences. The per-step constant time complexity of SSMs could be an advantage in accelerating this phase.\", \"> **2. Weak motivation in long-context modeling of proteins.**\", \"We thank the reviewer for this comment. We organize the biological motivations for long-context modeling of proteins into three perspectives: (1) functional, (2) structural, and (3) evolutionary. (1) Many proteins function as part of multi-protein complexes (e.g. transcription factors) and physically or functionally interact with other proteins and molecules. The interaction information is often captured in protein-protein interaction graphs. 
Knowing the interacting partners of an individual protein is helpful in predicting the protein\\u2019s properties. We demonstrate this on protein function prediction tasks using the ogb-proteins graphs, as shown in Tables 14 and 16. With interaction information, the model shows performance gain. (2) Protein structure depends on global fold, which can involve residues and interactions across long distances and across multiple protein sequences. Modeling multi-protein systems captures distant dependencies critical for stability and function. Folding of multi-meric protein complexes relies on models capable of handling long contexts. We demonstrate this benefit in our LMFold experiments in Table 2. Our model outperforms ESM-2 across all folding benchmarks, especially on CASP15-multimers. (3) Proteins in the same pathway or family exhibit co-evolutionary patterns due to functional interdependencies. In fact, multiple sequence alignment (MSA) of homologous protein sequences is a common approach to increase the contexts for studying individual proteins. As other reviewers noted, ProtMamba [[Sgarbossa et al.](https://www.biorxiv.org/content/10.1101/2024.05.24.595730v1)] is inspired by leveraging MSA as an individual protein\\u2019s context.\"]}", "{\"title\": \"Further response by Reviewer 8QMc\", \"comment\": \"Thank you for addressing my prior comments. To wrap up my thoughts and inspire further improvements to the manuscript, I would like to propose the following points mainly concerning me.\\n\\n**1. Problem Tackled**\\n\\n- The manuscript appears to propose a new type of foundational protein language model (pLM), LC-PLM, intended to be parallel to existing models like the ESM series (eg. ESM-2). 
The proposed pLMs are expected to address biologically relevant problems more effectively than these baselines, especially ESM-2 (650M), as emphasized by the authors.\\n- However, across the tasks and results presented, LC-PLM does not demonstrate parity with or superiority over ESM-2. Foundational problems are ambitious and potentially impactful when tackled effectively, but also difficult. I suggest the authors focus on a specific problem where long-context modeling is essential and where ESM-2 underperforms. Demonstrating such a use case would significantly strengthen the contribution.\\n\\n**2. Motivation**\\n\\nThe motivation for integrating Mamba/SSM into a pLM requires clearer articulation. While the authors provided \\u201cthree points\\u201d ((1) functional, (2) structural, and (3) evolutionary) in the rebuttal, I remain unconvinced from both the biological and the ML side.\\n\\n- The main selling point of Mamba in text/NLP domains is its long-context capability, which is acknowledged. However, its extension to the protein domain requires stronger and more specific justification. The PPI graph, in my opinion, is not a persuasive case.\\n- The transition from an ESM model to LC-PLM feels unmotivated for potential users. While the authors may have confidence in the framework, others still need a clear and compelling reason for this shift. I recommend reorganizing the introductory sections to restate the motivation and explicitly address the benefits of Mamba-based pLMs for proteins.\\n\\n**3. Support for Claims**\\n\\nThe experimental results presented fail to convincingly support the proposed architecture as an alternative for the specified tasks. Specific concerns include:\\n\\n- Retraining with limited training tokens and applying graph augmentation, which appear to disproportionately affect ESM performance, making the comparison feel unfair. \\n- The lack of scenarios where LC-PLM demonstrates clear advantages over the transformer-based ESM. 
Providing rigorous and unbiased experiments would strengthen the manuscript.\\n\\n**4. Significance**\\n\\nAlthough SotA performance is not a mandatory criterion for acceptance, the current manuscript lacks sufficient evidence of new knowledge or value for the community.\\n\\n- It is acknowledged that Mamba/SSM excels in long-context input, including protein sequences. However, to establish significance, the authors could either propose new learning algorithms to address foundational protein modeling challenges or demonstrate LC-PLM\\u2019s advantages in specific yet biologically meaningful scenarios where ESM-2 underperforms. \\n- Claiming LC-PLM superiority over transformer-based models without robust experimental support risks misleading the community and detracts from the potential impact of the work.\\n\\n\\nI maintain my original rating and do not recommend acceptance at this time. The manuscript requires stronger motivation, clearer contributions to the community (and optionally more compelling results) to reach its full potential. That being said, I think LC-PLM can potentially become a good work after proper and careful revision :)\"}" ] }
Essg9kb4yx
On Large Language Model Continual Unlearning
[ "Chongyang Gao", "Lixu Wang", "Kaize Ding", "Chenkai Weng", "Xiao Wang", "Qi Zhu" ]
While large language models have demonstrated impressive performance across various domains and tasks, their security issues have become increasingly severe. Machine unlearning has emerged as a representative approach for model safety and security by removing the influence of undesired data on the target model. However, these methods do not sufficiently consider that unlearning requests in real-world scenarios are continuously emerging, especially in the context of LLMs, which may lead to accumulated model utility loss that eventually becomes unacceptable. Moreover, existing LLM unlearning methods often ignore previous data access limitations due to privacy concerns and copyright protection. Without previous data, the utility preservation during unlearning is much harder. To overcome these challenges, we propose the \OOO{} framework that includes an \underline{\textit{O}}rthogonal low-rank adapter (LoRA) for continually unlearning requested data and an \underline{\textit{O}}ut-\underline{\textit{O}}f-Distribution (OOD) detector to measure the similarity between input and unlearning data. The orthogonal LoRA achieves parameter disentanglement among continual unlearning requests. The OOD detector is trained with a novel contrastive entropy loss and utilizes a glocal-aware scoring mechanism. During inference, our \OOO{} framework can decide whether and to what extent to load the unlearning LoRA based on the OOD detector's predicted similarity between the input and the unlearned knowledge. Notably, \OOO{}'s effectiveness does not rely on any retained data. We conducted extensive experiments on \OOO{} and state-of-the-art LLM unlearning methods across three tasks and seven datasets. The results indicate that \OOO{} consistently achieves the best unlearning effectiveness and utility preservation, especially when facing continuous unlearning requests. The source codes can be found at \url{https://github.com/GCYZSL/O3-LLM-UNLEARNING}.
[ "Continual Unlearning", "Large Language Models" ]
Accept (Poster)
https://openreview.net/pdf?id=Essg9kb4yx
https://openreview.net/forum?id=Essg9kb4yx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yO9EBIfvnG", "uZtop4SjYk", "uT5FUnHOBt", "rv2VBBlE3j", "r3o5XtmFzu", "mG9jIu7ROC", "k21B7M3IBo", "hihP1LI18C", "fIHkuix0AV", "ekIjEAP8f7", "cfaXSZ7paZ", "cGwb4oTKyx", "XhCG25Zzx6", "W9ZdLDgnQH", "UqNgQcmaOc", "S9CqlicCXF", "PWkFjkmnJk", "IE7Io3VbIZ", "HHNr0766dH", "EwMOrfpKkQ", "BZI9LFKDhA", "BSqZa5somE", "4xjXWkDwG4", "3ddO3V7pOo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732482377189, 1732666967728, 1732126122035, 1732126287683, 1734519831989, 1732126014302, 1732126082712, 1732492631577, 1732628108382, 1732659847542, 1732126196403, 1732126398254, 1732492477891, 1732492575170, 1732614961883, 1730680906334, 1732126047355, 1730703756978, 1737523930612, 1732126153312, 1732126362449, 1732125946354, 1730619843637, 1732126319793 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_9mDw" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Area_Chair_Atig" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_J9vg" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_nwA1" ], [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_J9vg" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_9mDw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ], [ "ICLR.cc/2025/Conference/Submission8760/Reviewer_nwA1" ], [ "ICLR.cc/2025/Conference/Submission8760/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the reply\", \"comment\": \"I thank the authors for their reply. I will keep my current score.\"}", "{\"comment\": \"Dear Reviewer J9vg,\\n\\nWe sincerely appreciate your reply and rating raise! Thank you again for your constructive comments and insightful suggestions. We are so glad that your concerns have been addressed, which significantly enhanced the quality of our work. We would also appreciate any further feedback or questions.\\n\\nBest Regards,\\n\\nAuthors of Paper 8760\"}", "{\"title\": \"Concern about whether O3 unlearns multi-entity knowledge\", \"comment\": \"We thank you for suggesting a more realistic setting involving multiple knowledge entities to be unlearned per request.\\n\\nIn response, we conducted experiments on the ScienceQA dataset, where we sequentially unlearned combinations of knowledge domains: biology and physics, followed by chemistry and economics. For each unlearning request, we mixed data samples from the two respective knowledge domains and followed the same continual unlearning process detailed in our paper for the ScienceQA dataset: (biology+physics)&rarr;(chemistry+economics). To evaluate the performance of our proposed O3 framework, we compared it with PO and SOPO, which were identified by Jia et al. [5] as superior to other baseline methods. 
As shown in the table below, our O3 framework significantly outperforms both baselines under this more complex scenario. These results demonstrate that **OOD detectors trained on multiple unlearning requests are robust and maintain strong performance, even in scenarios involving the unlearning of multiple knowledge entities**.\\n\\n\\n| | SU&darr; | DU&darr; | RD&uarr; | CommonQA&uarr; | OpenbookQA&uarr; | | SU&darr; | DU&darr; | RD&uarr; | CommonQA&uarr; | OpenbookQA&uarr; |\\n|-------|------|------|------|--------|----------|---|------|------|------|--------|----------|\\n| PO | 31.2 | 32.4 | 92.1 | 76.5 | 76.6 | | 30.7 | 29.3 | 90.9 | 75.0 | 75.6 |\\n| SOPO | 27.9 | 28.7 | **91.9** | 77.1 | 79.8 | | 23.5 | 23.1 | 91.1 | 76.0 | 77.0 |\\n| O3 | **19.9** | **26.5** | 91.6 | **78.2** | **80.0** | | **15.6** | **20.1** | **91.3** | **78.2** | **80.0** |\\n\\nAdditionally, we would like to clarify that, **in our main experiments, a single unlearning request still encompasses multiple distinct knowledge entities.** For instance, in the ScienceQA dataset, a particular request represents all knowledge related to a particular field, which can be broken down into multiple entities. For example, the first request, biology, includes knowledge related to genes, plants, animals, and more. Similarly, in the CLINC dataset, each unlearning request comprises various intents, which can also be considered as different types of knowledge. For example, the banking domain includes intents such as transferring funds, freezing accounts, reporting fraud, and others. Lastly, in the TOFU dataset, each request contains information associated with different authors, illustrating the concept of multiple knowledge entities within a single request.\"}", "{\"title\": \"Question about why retained data is unacceptable in practice\", \"comment\": \"> Question about why retained data is unacceptable in practice.\\n\\nTo begin with, let us think about the general data availability in LLM unlearning. 
For the data needed to be unlearned, we assume they are available during the unlearning operation. The origins of such unlearning data can be the unlearning requester or the LLM service provider, which depends on the application scenarios. After the unlearning, **such unlearning data becomes unavailable due to data privacy, intellectual property, and usage authorization regulations**. However, the retained training dataset of the target LLM cannot be assumed to be entirely available during unlearning due to these regulations, especially in sensitive areas like healthcare and finance, where maintaining access to personal or confidential data for utility preservation is not feasible. In this case, the best condition we can assume is that there are a small number of retained data samples. But our experiments in Appendix B.1 of the main paper show that the performance of EUL and SOGD starts to degrade when there are 20% retained samples, while all approaches degrade significantly when there are 5% retained samples. Since the original retained sample number is approximately 5,000, 20% samples correspond to 1,000, and even for 5%, there are 250 samples. We conduct additional experiments on fictitious knowledge generation and intent classification to investigate further the importance of the retained data quantity to existing LLM unlearning approaches in Appendix E.2. Specifically, we reduce the accessible retained dataset to 10% and 1% and carry out the experiments. We observe that all these approaches perform much more poorly than when they can access sufficient retained data. In particular, the metrics corresponding to the utility preservation drop significantly, similar to the observed phenomenon in our empirical study (Appendix B). These results validate the necessity of retained data for these LLM unlearning approaches. 
\\n\\nIn practice, **it is difficult for the LLM service provider to collect sufficient data from the tasks most susceptible to unlearning.** The difficulties lie in several facets. First, characterizing and localizing the tasks susceptible to unlearning is difficult (please refer to Appendix G.2 for more discussion). Second, their corresponding data may be limited. For example, malicious backdoors of LLMs are implanted in rare behaviors, LLM users request unlearning highly related to private information, and some professional knowledge becomes outdated and incorrect over time. The tasks susceptible to these unlearning requests intrinsically correspond to limited or inaccessible data. Moreover, the retained data should be annotated with accurate labels, increasing the difficulty of sufficient data collection. In conclusion, the existing language model unlearning approaches cannot work effectively with limited retained data, which is common in real-world LLM unlearning applications.\\n\\n\\n\\nMoreover, as the data from the tasks or distributions most susceptible to the unlearning requests is hard to acquire, **one of the possible solutions is to leverage the data from other irrelevant distributions**, as in the experiments in Appendix B.1. We substitute different quantities of the original retained data of ScienceQA with equal numbers of samples from CommonsenseQA to conduct the experiments. We observe that all baseline approaches drop significantly when 90% of the retained samples come from non-ScienceQA. With this observation, we conclude that using data from other distributions brings little gain in retaining the performance on unlearning-susceptible distributions. 
This further demonstrates **the importance for existing LLM unlearning approaches to access sufficient retained data from the unlearning-susceptible distributions**, which is challenging in practice.\\n\\nIn summary, we still **don't think it is realistic and practical to assume the availability of a well-prepared retained dataset** that is most susceptible to the unlearning. However, we believe **it is worthwhile to explore the retained data selection for preserving LLM's general utility** during unlearning in the future. Potential techniques may include utilizing some interpretable machine learning techniques [11] to locate the neurons activated by the unlearning data. Based on these identified activated neurons, we could retrieve similar data from some database (retrieval augmented generation or active learning) as the retained data.\"}", "{\"metareview\": \"This paper presents a solution to the Continual Unlearning problem, in which the model provider attempts to continuously erase the influence of requested data. The authors motivate their problem formulation because existing model unlearning methods fail to account for the scenario where the unlearning requests emerge continuously, and the model provider lacks access to previous data. To this end, the authors propose a continual unlearning framework that includes LoRA adapters with orthogonal components to minimize the interference among different unlearning requests, along with an OoD detector that measures the similarity between input and unlearning data to decide on how unlearning LoRAs should be loaded. This paper is well-written and motivated by real-world scenarios that require unlearning. The authors solve the critical challenge of LLM unlearning by getting rid of the access to the retained data. The design of orthogonal LoRA demonstrates significant improvement in evaluation, and the design choices for the proposed unlearning pipeline are justified through ablation studies. 
While the reviewers had some concerns about the evaluation, the authors did a particularly good job in their rebuttal. Therefore, all of us have agreed to accept this paper for publication! Please include the additional discussion in the next version.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers raise the score after the rebuttal.\"}", "{\"title\": \"Additional experiments of unlearning unsafe behaviors like those in benchmark WMDP\", \"comment\": \"> Additional experiments of unlearning unsafe behaviors like those in benchmark WMDP.\\n\\nWe appreciate the reviewers' insightful suggestion to include additional evaluations on unsafe behaviors. In response, we **have conducted experiments using the WMDP benchmark and a detoxification benchmark**.\\n\\nFor the WMDP benchmark, we partitioned the WMDP multiple-choice question dataset into training, validation, and test sets with a 70%/10%/20% split. The dataset focuses on three types of hazardous knowledge: biosecurity, chemical security, and cybersecurity. As noted in the WMDP paper, biosecurity and chemical security are particularly critical areas. Therefore, we prioritized continual unlearning of hazardous knowledge in these two domains.\\n\\nFollowing SOUL [1], we also utilized LLaMA2-7b as the target model. To evaluate our proposed O3 framework, we compared its performance against PO and SOPO [1], which were identified by Jia et al. [1] as superior to other baseline methods. The results, summarized in the table below, **demonstrate that the O3 framework significantly outperforms these baselines in forgetting hazardous knowledge**. 
Notably, while PO and SOPO rely on access to retained data, our **O3 framework achieves better performance without using any retained data.**\\n| | SU &darr; | DU &darr; | RD &uarr; | | SU &darr; | DU &darr; | RD &uarr; |\\n|------|------|------|------|---|------|------|------|\\n| PO | 45.3 | 33.9 | 55.0 | | 20.4 | 24.8 | 55.9 |\\n| SOPO | 30.0 | 26.4 | 52.5 | | 24.8 | 21.5 | **57.2** |\\n| O3 | **24.6** | **24.8** | **55.3** | | **11.5** | **12.8** | **57.2** |\\n\\nIn Appendix E.5 of the paper, we also present the performance of our proposed method on an unsafe behavior, LLM detoxification, which aims to prevent LLMs from generating toxic content. For this evaluation, we used negative samples from the training set of PKU-SafeRLHF [2], splitting them into three unlearning requests to simulate a continual unlearning scenario. Consistent with prior experiments and the methodology of SOUL [1], we adopted LLaMA2-7b as the target model.\\n\\nThe unlearning effectiveness was assessed using the toxicity score (lower is better) on RealToxicity Prompts (RTP) [3] and PKU-SafeRLHF datasets. Utility preservation was measured by performance (higher is better) on the TruthfulQA benchmark [4]. 
As shown in the table below, **our O3 framework substantially outperforms the baseline methods PO and SOPO in unlearning effectiveness and utility preservation.** It is important to note that while PO and SOPO leverage access to sufficient retained data, **our O3 framework achieves superior performance without using any retained data**.\\n\\n| | RTP&darr; | PKU-SafeRLHF&darr; | TruthfulQA&uarr; | | RTP&darr; | PKU-SafeRLHF&darr; | TruthfulQA&uarr; | | RTP&darr; | PKU-SafeRLHF&darr; | TruthfulQA&uarr; |\\n|------|--------|--------------|------------|---|--------|--------------|------------|---|--------|--------------|------------|\\n| PO | 0.0678 | 0.0830 | 0.2521 | | 0.0670 | 0.0764 | 0.2522 | | 0.0604 | 0.0711 | 0.2543 |\\n| SOPO | 0.0675 | 0.0802 | 0.2537 | | 0.0625 | 0.0766 | 0.2584 | | 0.0567 | 0.0705 | 0.2644 |\\n| Ours | **0.0569** | **0.0605** | **0.2563** | | **0.0533** | **0.0578** | **0.2622** | | **0.0495** | **0.0462** | **0.2650** |\"}", "{\"title\": \"Concern about the computation cost and scalability\", \"comment\": \"> Concern about the computation cost and scalability.\\n \\nWe thank the reviewer for highlighting their concerns regarding the computational cost and scalability of our O3 framework. Below, we address these aspects and outline potential directions for future improvements.\\n\\n**Computation Cost**: As detailed in Appendix E.1 of our main paper, we measured the time overhead during the inference stage using the Calflops [1] tool with a single batch input. While **baseline methods require more resources during training**, they invoke the LLM during inference, registering 13,215 MFLOPS. In practical system deployment, regardless of the number of unlearning requests processed, our method enables Unlearning Knowledge Detection with OOD detector backbones to operate in parallel [2], consuming only 709.37 MFLOPS. 
Combined with the Soft-weighted Inference, which requires 13,255 MFLOPS, the total computational overhead is 13,964.37 MFLOPS\\u2014**representing just a 5.6% increase compared to the baselines**. \\n\\nBoth our method and the baselines store the LLM, which occupies 12,862 MB. Additionally, our method introduces OOD-related storage (1,450 MB) and LoRA (39 MB), resulting in a total additional storage requirement of 11.6% over the baselines. Given that storage is typically more affordable and less resource-intensive compared to GPU usage, this overhead is unlikely to pose significant challenges in practical applications, especially for large companies.\\n\\n**Scalability**: As mentioned in the computation cost analysis above, regardless of the number of unlearning requests processed, our method allows parallel computation with only a 5.6\\% higher computational cost, which is reasonable and manageable for big organizations. **While our method incurs slightly higher overheads than the baselines, these costs bring about significant improvements in unlearning effectiveness and utility preservation, and they are amenable to significant reductions through future optimizations**. For example, replacing the separate embedding model for the OOD detector backbones with the LLM\\u2019s native embedding could reduce storage overheads by nearly 90% and computational consumption by approximately 95%.\\n\\nSpecifically, in our implementation, the OOD detector backbone uses the encoder-only Roberta model. Although this model can extract high-quality representations, its performance is still limited when faced with very complex inputs compared to larger-scale language models. Therefore, we consider directly using the target LLM to detect unlearning knowledge in the future. This approach is feasible because, in the O3 framework, we use LoRA as an external module to achieve unlearning, and the original target LLM is available for inference. 
We should gain the following benefits if we replace the OOD detector backbone with an LLM. First, LLMs can better capture subtle text differences, improving OOD detection performance. Second, smaller language models like Roberta cannot effectively extract contextual information from complex and long contexts. Thus, if an unlearning request involves contextual information, such as an individual user's request to unlearn specific topics from their chat history with ChatGPT, Roberta-based OOD detection cannot achieve this. In contrast, LLMs can extract contextual information well [3], supporting more fine-grained OOD detection and more accurate ID data localization. Finally, using LLMs for OOD detection might eliminate the need for fine-tuning with ID data, as [4] suggested that LLMs could provide accurate OOD detection predictions for text classification without any fine-tuning. This could further improve our framework's efficiency.\"}", "{\"title\": \"Kind Reminder before Reviewer-Author Discussion Phase Closure for Reviewer nwA1\", \"comment\": \"Dear Reviewer nwA1,\\n\\nThe discussion period is drawing to a close, and we eagerly await your response. We greatly appreciate your time and effort in reviewing this paper and helping us improve it.\\n\\nThank you again for the detailed and constructive reviews. We hope our response is able to address your comments related to the difference between our work and regular approaches with exact unlearning, the unacceptance of retained data in practice, the technical novelty of O3, and the robustness of O3 against attacks. We take this as a great opportunity to improve our work and shall be grateful for any additional feedback you could give us. We fully understand that you may be busy at this time, but we hope that you could kindly have a quick look at our responses and assess whether they have addressed your concerns and warrant an update to the rating. 
We would also welcome any additional feedback and questions. We sincerely appreciate your dedication and time again.\\n\\nBest Regards,\\n\\nAuthors of Paper 8760\"}", "{\"title\": \"Thank you for your reply and Further question\", \"comment\": \"> The key purpose of machine unlearning is to remove the data from the existing trained model. Install a shell that may be able to block the access.\\n\\nThank you for your valuable feedback and for raising your rating! Regarding your concerns about the uniqueness of LLM unlearning, we\\u2019d like to share our perspective:\\n\\nFirst, we agree that machine unlearning generally aims to remove specific data from a model. However, **in the context of LLMs, additional considerations are necessary** to ensure unlearning is both reasonable and effective. The key distinction between LLMs and traditional ML models lies in their scale, which introduces two unique challenges: **1) Any updates to LLMs require substantial computational resources; 2) Such updates often result in uncertain and unpredictable changes to the model's utility**. To address these challenges, recent research has explored methods to partially or entirely avoid direct updates to LLMs during unlearning, focusing on reducing computational costs and mitigating utility loss.\\n\\nAdditionally, from our perspective, **unlearning is not solely about the process\\u2014it\\u2019s the outcome that truly matters**. Specifically, effective unlearning should be reflected in the model\\u2019s inference behavior: knowledge that has been \\\"unlearned\\\" must result in distinct inference outputs compared to retained knowledge. In this regard, **our O3 approach aligns with traditional unlearning methodologies**.\\n\\nLastly, **implementing access controls, such as filters or shells, to block knowledge as a means of unlearning is inherently ineffective**. 
As demonstrated in our main paper, even state-of-the-art OOD detection methods perform poorly in achieving accurate hard-label filtering in language. Furthermore, no reliable mechanisms currently exist for soft access control in LLMs that can be used for unlearning. Even if future advancements improve filtering accuracy, access blocking as a method of unlearning has a critical limitation: **it cannot generate appropriate responses for blocked queries (i.e., unlearned knowledge)**. Simple refusals or random outputs are insufficient, as various inference attacks\\u2014such as membership, attribute, and property attacks\\u2014can easily extract sensitive information related to the unlearned knowledge.\\n\\nIn conclusion, we believe there are many exciting and important directions for improving LLM unlearning, and we hope this field continues to evolve. Thank you again for your constructive and insightful comments!\"}", "{\"title\": \"Concerns about O3 cannot exactly unlearn knowledge with the current external blocking design\", \"comment\": \"> Concerns about O3 cannot exactly unlearn knowledge with the current external blocking design.\\n\\nWe appreciate your insightful perspectives on unlearning knowledge exactly from LLMs. We would like to address that, in real-world scenarios, **it is often unnecessary and impractical to unlearn knowledge exactly from LLMs**. This holds true from both closed-source and open-source LLM perspectives.\\n\\nFirstly, in practice, the most widely applied and powerful models, such as Gemini, GPT-4, and Claude, are predominantly closed-source. After the unlearning process, there is no guarantee of which model, the original or the unlearned one, the company will deploy for inference, which **poses a general challenge for LLM unlearning** from a security perspective. 
This issue can be addressed using **secure inference** methods based on multi-party computation (MPC) or zero-knowledge proofs (ZKP), **which can verify that every inference is generated by the unlearned model**. Notably, these approaches apply equally to both the exactly unlearned model and our proposed architecture. In other words, whether using exact unlearning or our O3 framework, both can be treated as black-box functions and verified by MPC or ZKP without any difference for closed-source models. We plan to implement secure inference for O3 in the future.\\n\\nFurthermore, for closed-source models, which often contain hundreds of billions of parameters, **unlearning the model exactly is computationally expensive.** Additionally, **unlearning can lead to unpredictable performance degradation in the utility functionality** of the LLM [1,2,3,4]. These challenges are even more pronounced in continual unlearning settings [5,6]. Our experiments in Appendix B.2 of the main paper also demonstrate that with continuously arriving unlearning requests, e.g., when daily users periodically want to delete dialog history, catastrophic forgetting accumulates over time. Therefore, for owners of large closed-source models, conducting exact unlearning on the original LLM, especially in continuous scenarios, is less favorable compared to adopting our proposed method.\\n\\nFor open-source models, while the cost of unlearning the exact model is reduced due to fewer parameters, the problem of **accumulated utility performance degradation persists**, as noted by Gu et al. [5] and demonstrated by our experiments in Appendix B.2. Additionally, it is infrequent for open-source model providers, such as those behind the LLaMA, Gemma, and Phi series, to update their models regularly. In such cases, it is often more practical to train a new version of the model without the data that needs to be unlearned. 
For users of open-source models who need to **unlearn frequently**, e.g., when the knowledge becomes outdated and incorrect over time, **our method is particularly attractive due to its lower training computation requirements, better unlearning performance, and smaller impact on utility performance.**\\n\\nIn summary, for most practical scenarios where unlearning is required, our proposed method offers a viable alternative compared with so-called exact unlearning based on model editing. **It reduces computational demands, achieves better unlearning performance, and minimizes utility performance degradation in continual settings**, making it a more practical solution for both closed-source and open-source models.\\n\\nBesides, our O3 framework does not simply mount two external modules that block the inputs and outputs related to unlearning targets. Owing to its innovative architecture and the well-crafted OOD module and orthogonal LoRA, O3 delivers the practical benefits described above. Please refer to our following response for more details on O3's technical novelty.\"}", "{\"title\": \"Reference\", \"comment\": \"[1] Liu, Sijia, et al. \\\"Rethinking machine unlearning for large language models.\\\" arXiv preprint arXiv:2402.08787 (2024).\\n\\n[2] Jia, Jinghan, et al. \\\"Soul: Unlocking the power of second-order optimization for llm unlearning.\\\" EMNLP (2024).\\n\\n[3] Yao, Yuanshun, Xiaojun Xu, and Yang Liu. \\\"Large language model unlearning.\\\" arXiv preprint arXiv:2310.10683 (2023).\\n\\n[4] Zhang, Ruiqi, et al. \\\"Negative preference optimization: From catastrophic collapse to effective unlearning.\\\" arXiv preprint arXiv:2404.05868 (2024).\\n\\n[5] Gu, Jia-Chen, et al. \\\"Model editing harms general abilities of large language models: Regularization to the rescue.\\\" EMNLP (2024).\\n\\n[6] Gupta, Akshat, Anurag Rao, and Gopala Anumanchipalli. 
\\\"Model editing at scale leads to gradual and catastrophic forgetting.\\\" arXiv preprint arXiv:2401.07453 (2024).\\n\\n[7] Gao, Tianyu, Xingcheng Yao, and Danqi Chen. \\\"Simcse: Simple contrastive learning of sentence embeddings.\\\" EMNLP (2021).\\n\\n[8] De Maesschalck, Roy, Delphine Jouan-Rimbaud, and D\\u00e9sir\\u00e9 L. Massart. \\\"The mahalanobis distance.\\\" Chemometrics and intelligent laboratory systems (2000).\\n\\n[9] Chen, Jiefeng, et al. \\\"Robust out-of-distribution detection for neural networks.\\\" AAAI (2022).\\n\\n[10] Shi, Weijia, et al. \\\"Detecting pretraining data from large language models.\\\" ICLR (2023).\\n\\n[11] Singh, Chandan, et al. \\\"Rethinking interpretability in the era of large language models.\\\" arXiv preprint arXiv:2402.01761 (2024).\\n\\n[12] Hu, Shengyuan, et al. \\\"Jogging the Memory of Unlearned Model Through Targeted Relearning Attack.\\\" arXiv preprint arXiv:2406.13356 (2024).\"}", "{\"title\": \"Thanks to Reviewer 9mDw\", \"comment\": \"Dear Reviewer 9mDw,\\n\\nWe really appreciate your reply! Thank you again for your positive review and insightful suggestions. We are so glad that your concern could be addressed. We also appreciate any further feedback and questions.\\n\\nBest Regards,\\n\\nAuthors of Paper 8760\"}", "{\"title\": \"Kind Reminder before Reviewer-Author Discussion Phase Closure for Reviewer J9vg\", \"comment\": \"Dear Reviewer J9vg,\\n\\nThank you for your initial constructive comments and insightful suggestions. We greatly appreciate your time and effort in reviewing this paper and helping us improve it.\\n\\nWe hope our response is able to address your comments related to the computation costs, additional evaluation of multi-entity unlearning, and scalability analysis with larger number of requests. We take this as a great opportunity to improve our work and shall be grateful for any additional feedback you could give us. Your feedback is really important to us. 
We eagerly await any potential updates to your ratings, as they play a critical role in the assessment of our paper. Your thoughtful evaluation greatly aids in our paper's refinement and strength. We sincerely appreciate your dedication and time again.\\n\\nBest Regards,\\n\\nAuthors of Paper 8760\"}", "{\"comment\": \"I thank the authors for their reply. These results partially resolve my question.\\n\\nThe key purpose of machine unlearning is to remove the data from the existing trained model. Install a shell that may be able to block the access. This part is not convincing. Further details with attack evaluation, is ok to me. I will increase my rank to 6.\"}", "{\"summary\": \"The authors address significant practical challenges in developing machine unlearning techniques for large language models (LLMs), where current state-of-the-art approaches fall short due to their dependency on retained data and inability to manage continual unlearning requests effectively. To overcome these limitations, they propose the O^3 framework, which introduces an orthogonal low-rank adapter to enable continuous unlearning of requested data and an out-of-distribution detector to assess the similarity between incoming inputs and unlearning data. Comprehensive experiments show that O^3 achieves notably higher unlearning effectiveness and utility preservation than existing methods, all without relying on retained data, even under continuous unlearning conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors solve the critical challenge of LLM unlearning by getting rid of the access to the retained data. The design of orthogonal LoRA demonstrates significant improvement in evaluation.\\n\\n2. The authors conduct extensive experiments to evaluate the effectiveness of the proposed O^3 method.\", \"weaknesses\": \"1. During the inference, each testing instance x will be fed into all OOD detector backbones. 
This might limit the method's scalability when the unlearning requests increase due to the higher computational cost.\\n\\n2. The experiments focus on the QA datasets, where each query only contains a single knowledge entity to be unlearned. The authors might need to evaluate the framework under more challenging and realistic settings, wherein for each query, there might be multiple knowledge entities to be unlearned. I am concerned about this because the OOD detectors are trained on single unlearning requests.\\n\\n3. The authors should improve the scale of unlearning requests (more than just 4 requests) to validate the claimed contribution that the O^3 framework can handle continual unlearning settings.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Evaluation of O3's robustness against Targeted Relearning Attacks\", \"comment\": \"> Evaluation of O3's robustness against Targeted Relearning Attacks.\\n\\nWe appreciate the reviewers' suggestion to evaluate the robustness of our O3 framework under Targeted Relearning Attacks. To experiment on this, we followed the targeted relearning attack using public information setting described in [5]. Specifically, we relearned the unlearned ScienceQA model using the validation set of the OpenbookQA dataset, which contains science-related questions relevant to the ScienceQA benchmark.\\n\\nIn our experiment, we first unlearned the model sequentially across four science domains in the ScienceQA dataset\\u2014biology \\u2192 physics \\u2192 chemistry \\u2192 economics\\u2014following the same methodology presented in our main paper. 
We then applied the targeted relearning attack using the validation set of OpenbookQA to relearn the unlearned knowledge.\\n\\nWe evaluated the performance of PO [1], SOPO [1], and our O3 framework before and after the relearning attack for the last unlearning request, as shown in the table below. The results demonstrate that **our O3 framework is significantly more robust, achieving the best post-attack performance**. For instance, in the case of Distribution-level Unlearning, the performance drop for O3 was only 3.7, compared to 24 and 30.3 for PO and SOPO, respectively. We believe that robustness against relearning is important and essential in real-world settings, and we plan to explore this further in the future.\\n\\n| | SU &darr; | DU &darr; | RD &uarr; | CommonQA &uarr; | OpenbookQA &uarr; |\\n|----------------|------|------|------|--------|----------|\\n| PO | 59.9 | 58.7 | 90.8 | 75.8 | 77.6 |\\n| Relearned FOPO | 86.2 | 82.7 | 91.3 | 76.7 | 78.0 |\\n| | | | | | |\\n| SOPO | 29.6 | 27.9 | 89.7 | 76.8 | 77.8 |\\n| Relearned SOPO | 60.9 | 58.2 | 89.4 | 74.4 | 72.0 |\\n| | | | | | |\\n| O3 | 9.3 | 14.0 | 91.1 | 78.5 | 80.8 |\\n| Relearned O3 | 15.5 | 17.7 | 89.6 | 75.0 | 72.6 |\\n\\n[1] Jia, Jinghan, et al. \\\"Soul: Unlocking the power of second-order optimization for llm unlearning.\\\" EMNLP (2024).\\n\\n[2] Ji, Jiaming, et al. \\\"Beavertails: Towards improved safety alignment of llm via a human-preference dataset.\\\" NeurIPS (2024).\\n\\n[3] Gehman, Samuel, et al. \\\"Realtoxicityprompts: Evaluating neural toxic degeneration in language models.\\\" ACL (2020).\\n\\n[4] Lin, Stephanie, Jacob Hilton, and Owain Evans. \\\"Truthfulqa: Measuring how models mimic human falsehoods.\\\" ACL (2021).\\n\\n[5] Hu, Shengyuan, et al. 
\\\"Jogging the Memory of Unlearned Model Through Targeted Relearning Attack.\\\" arXiv preprint arXiv:2406.13356 (2024).\"}", "{\"summary\": \"This paper presents a solution to the Continual Unlearning problem, in which the model provider attempts to continuously erase the influence of requested data. The authors motivate their problem formulation because existing model unlearning methods fail to account for the scenario where the unlearning requests emerge continuously, and the model provider lacks access to previous data. To this end, the authors propose a continual unlearning framework that includes LoRA adapters with orthogonal components to minimize the interference among different unlearning requests, along with an OoD detector that measures the similarity between input and unlearning data to decide on how unlearning LoRAs should be loaded.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-written and motivated by real-world scenarios that require unlearning.\\n2. The design choices for the proposed unlearning pipeline are justified through ablation studies.\", \"weaknesses\": \"The authors partially motivate machine unlearning as \\u201c a representative approach for model safety and security by removing the influence of undesired data on the target model.\\u201d I very much agree with the assertion. However, the evaluations of the proposed methods mainly focus on unlearning knowledge instead of unsafe behaviors. The usefulness of the proposed method could benefit from additional evaluations against safety-oriented unlearning benchmarks such as WMDP [1].\\n\\n[1] Li, N., Pan, A., Gopal, A., Yue, S., Berrios, D., Gatti, A., Li, J.D., Dombrowski, A.K., Goel, S., Phan, L. and Mukobi, G., 2024. The wmdp benchmark: Measuring and reducing malicious use with unlearning. 
arXiv preprint arXiv:2403.03218.\", \"questions\": \"How robust is the proposed method against \\u201dtargeted relearning attacks\\u201d? For example, in a safety-critical scenario, can the supposedly unlearned knowledge be solicited again after finetuning the model on related public datasets? [2]\\n\\n[2] Hu, S., Fu, Y., Wu, Z.S. and Smith, V., 2024. Jogging the Memory of Unlearned Model Through Targeted Relearning Attack. arXiv preprint arXiv:2406.13356.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Scale the unlearning with more requests\", \"comment\": \"> Scale the unlearning with more requests\\n\\nThank you so much for the suggestion of experimenting on more unlearning requests. We carried out experiments by dividing the TOFU-forget05 and TOFU-forget10 into 5 and 10 unlearning requests, respectively. In this way, each unlearning request contains information of 2 fictitious authors. To better validate the effectiveness of our O3 framework, we also conduct experiments using PO and SOPO. The detailed experiments are shown in tables below, from which we can observe that the **O3 framework substantially exceeds other baselines** in unlearning effectiveness and utility preservation. 
Besides, **as the number of unlearning requests increases, the strengths of our O3 framework become more evident.**\\n\\n**Unlearning 5 requests**\\n| Request | | | 1 | | | | | | 2 | | | | | | 3 | | | | | | 4 | | | | | | 5 | | |\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n| | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; |\\n| PO | 17.0 | 21.1 | 80.4 | 86.5 | 84.0 | | 30.7 | 35.6 | 82.0 | 82.4 | 83.2 | | 38.6 | 41.1 | 81.5 | 81.5 | 82.7 | | 46.0 | 55.2 | 80.0 | 81.8 | 82.0 | | 50.4 | 56.7 | 77.9 | 80.3 | 81.0 |\\n| SOPO | 24.4 | 25.0 | 84.2 | 86.6 | 85.0 | | 34.5 | 36.8 | 81.7 | 82.8 | 82.5 | | 40.7 | 38.8 | 80.9 | 82.0 | 81.3 | | 45.6 | 47.0 | 77.9 | 80.9 | 79.8 | | 54.2 | 56.6 | 75.2 | 76.8 | 76.5 |\\n| O3 | 13.0 | 14.5 | 85.7 | 89.0 | 86.3 | | 14.2 | 14.0 | 85.5 | 89.0 | 86.0 | | 16.2 | 17.5 | 85.5 | 88.8 | 86.3 | | 16.5 | 18.4 | 85.4 | 88.6 | 86.2 | | 17.0 | 19.2 | 85.2 | 88.8 | 86.0 |\\n\\n**Unlearning 10 requests**\\n| Request | | | 1 | | | | | | 2 | | | | | | 3 | | | | | | 4 | | | | | | 5 | | |\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n| | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; | | SU&darr; | DU&darr; | RD&uarr; | RA&uarr; | WF&uarr; |\\n| PO | 20.4 | 25.7 | 81.5 | 86.5 | 84.8 | | 33.2 | 34.0 | 81.0 | 85.7 | 83.8 | | 42.2 | 44.8 | 80.3 | 84.0 | 81.4 | | 48.9 | 50.7 | 78.8 | 81.8 | 80.5 | | 50.5 | 52.7 | 77.9 | 80.6 | 80.2 |\\n| SOPO | 27.7 | 31.6 | 83.7 | 86.5 | 84.3 | | 32.8 | 35.5 
| 82.0 | 82.9 | 83.7 | | 40.2 | 41.7 | 80.9 | 82.4 | 81.0 | | 47.8 | 47.0 | 80.4 | 81.5 | 80.5 | | 50.6 | 54.6 | 78.7 | 79.5 | 79.8 |\\n| O3 | 14.0 | 14.7 | 85.8 | 89.0 | 86.3 | | 14.2 | 15.5 | 85.8 | 89.0 | 86.0 | | 15.7 | 16.8 | 85.5 | 88.8 | 86.2 | | 16.5 | 17.7 | 85.2 | 88.6 | 86.2 | | 17.0 | 20.4 | 85.0 | 88.4 | 86.0 |\\n| **Request** | | | 6 | | | | | | 7 | | | | | | 8 | | | | | | 9 | | | | | | 10 | | |\\n| PO | 56.0 | 60.4 | 74.5 | 79.3 | 79.8 | | 58.3 | 62.7 | 72.0 | 78.6 | 77.9 | | 60.8 | 62.7 | 70.5 | 77.0 | 78.0 | | 61.1 | 62.9 | 70.8 | 76.9 | 77.3 | | 62.0 | 63.5 | 70.2 | 76.8 | 77.2 |\\n| SOPO | 54.4 | 61.2 | 76.7 | 78.2 | 79.0 | | 55.8 | 59.1 | 78.0 | 77.9 | 77.8 | | 57.2 | 61.0 | 78.2 | 77.5 | 76.8 | | 57.9 | 60.5 | 76.4 | 75.7 | 76.3 | | 60.0 | 62.6 | 76.7 | 76.5 | 75.4 |\\n| O3 | 18.4 | 20.8 | 85.0 | 88.2 | 86.0 | | 20.4 | 22.5 | 84.8 | 88.0 | 85.7 | | 20.7 | 24.0 | 85.0 | 88.0 | 85.5 | | 20.0 | 22.3 | 85.2 | 88.2 | 86.0 | | 23.4 | 25.0 | 85.0 | 87.5 | 85.5 |\\n\\n[1] https://github.com/MrYxJ/calculate-flops.pytorch\\n\\n[2] Agrawal, Amey, et al. \\\"Vidur: A Large-Scale Simulation Framework For LLM Inference.\\\" MLSys (2024).\\n\\n[3] Ding, Yiran, et al. \\\"Longrope: Extending llm context window beyond 2 million tokens.\\\" ICML (2024).\\n\\n[4] Uppaal, Rheeya, Junjie Hu, and Yixuan Li. \\\"Is fine-tuning needed? pre-trained language models are near perfect for out-of-domain detection.\\\" ACL (2023).\\n\\n[5] Jia, Jinghan, et al. \\\"Soul: Unlocking the power of second-order optimization for llm unlearning.\\\" EMNLP (2024).\"}", "{\"title\": \"Concern about O3's robustness against attacks\", \"comment\": \"> Concern about O3's robustness against attacks.\\n\\nWe appreciate your insights on potential attack methods. 
To demonstrate the robustness of our approach against such attacks, we conducted experiments involving **adversarial attacks designed to bypass unlearning knowledge detection, membership inference attacks, and targeted relearning attacks**. Detailed descriptions and results of these experiments are provided in Appendix F.2 and Appendix E.4 of the main paper.\\n\\nIn the real-world deployment of our O3 framework, there may be a concern that malicious attackers apply adversarial attacks to bypass unlearning knowledge detection. Therefore, we conduct experiments to investigate the possibility of such cases. Specifically, we implement an adversarial attack [9] against OOD detection that injects a certain perturbation to fool the OOD detector into identifying ID data as OOD data. In the context of textual data, we leverage heuristic replacement on characters to generate such perturbation. The experiments on TOFU show that the AUROC has no significant drop and the continual unlearning effectiveness remains nearly unchanged. The AUROC is measured between the unlearning data and the retained data distributions. We can conclude that **it is hard to bypass the unlearning knowledge detection and our O3 framework is robust**.\\n\\n\\n| | SU | DU | AUROC | | SU | DU | AUROC | | SU | DU | AUROC |\\n|-----------------|------|------|-------|---|------|------|-------|---|------|------|-------|\\n| Ours w/ Attack | 12.5 | 15 | 97.5 | | 16.4 | 19.8 | 92.5 | | 17.4 | 19.5 | 92.2 |\\n| Ours w/o Attack | 12.5 | 14.4 | 97.8 | | 15.8 | 20.3 | 92.5 | | 15.5 | 19.7 | 93 |\\n\\n\\n\\nWe also conducted Membership Inference Attacks (MIA) on the ScienceQA dataset following [2]. The training data for the pre-trained model contains the training data of the unlearning request, and the model can distinguish the unseen data in the test set from the unlearning request [10]. 
After unlearning, the less the model can distinguish between the training and test data of the unlearning requests, the better it resists MIA, indicating more effective unlearning. We assessed the vulnerability using the MIN-k\\\\%-based MIA with the AUC metric. A lower AUC indicates that the model is less able to distinguish between training and test data of the unlearning requests, which is preferable for resistance against MIAs. We compared O3 with SOPO, which was identified by Jia et al. [2] as superior to other baseline methods. As shown in the table below, **our method consistently outperformed SOPO**. For instance, at k=10, our method achieved an AUC of 0.559, which is lower than SOPO\\u2019s AUC of 0.655. Similarly, for k=30 through 60, our AUC remained at 0.553, compared to SOPO\\u2019s AUC of 0.652 to 0.653. \\n\\n\\n| k | 5 | 10 | 20 | 30 | 40 | 50 | 60 |\\n|------|---------|---------|---------|---------|---------|---------|---------|\\n| SOPO | 0.673 | 0.655 | 0.652 | 0.652 | 0.652 | 0.653 | 0.653 |\\n| Ours | 0.568 | 0.559 | 0.553 | 0.553 | 0.553 | 0.553 | 0.553 |\\n\\nBesides, we also conducted Targeted Relearning Attacks (thanks for Reviewer 9mDw's suggestion). To experiment on this, we followed the setting of targeted relearning attacks using public information, as described in [12]. Specifically, we relearned the unlearned ScienceQA model using the validation set of the OpenbookQA dataset, which contains science-related questions relevant to the ScienceQA benchmark.\\n\\nIn our experiment, we first unlearned the model sequentially across four science domains in the ScienceQA dataset\\u2014biology \\u2192 physics \\u2192 chemistry \\u2192 economics\\u2014following the same methodology presented in our main paper. 
We then applied the targeted relearning attack using the validation set of OpenbookQA to relearn the unlearned knowledge.\\n\\nWe evaluated the performance of PO, SOPO, and our O3 framework before and after the relearning attack for the last unlearning request, as shown in the table below. The results demonstrate that **our O3 framework is significantly more robust, achieving the best post-attack performance**. For instance, in the case of Distribution-level Unlearning, the performance drop for O3 was only 3.7, compared to 24 and 30.3 for PO and SOPO, respectively.\\n\\n| | SU &darr; | DU &darr; | RD &uarr; | CommonQA &uarr; | OpenbookQA &uarr; |\\n|----------------|------|------|------|--------|----------|\\n| PO | 59.9 | 58.7 | 90.8 | 75.8 | 77.6 |\\n| Relearned FOPO | 86.2 | 82.7 | 91.3 | 76.7 | 78.0 |\\n| | | | | | |\\n| SOPO | 29.6 | 27.9 | 89.7 | 76.8 | 77.8 |\\n| Relearned SOPO | 60.9 | 58.2 | 89.4 | 74.4 | 72.0 |\\n| | | | | | |\\n| O3 | 9.3 | 14.0 | 91.1 | 78.5 | 80.8 |\\n| Relearned O3 | 15.5 | 17.7 | 89.6 | 75.0 | 72.6 |\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We would like to thank all the reviewers for their constructive comments and suggestions. In particular, we are sincerely grateful to Reviewers 9mDw, J9vg, and nwA1 for recognizing the motivation behind our work, which focuses on real-world LLM unlearning scenarios where the retained dataset may be unavailable, unlearning requests are continual, and LLM owners are reluctant to modify their base models due to significant computational costs and potential performance uncertainties. Besides, we also thank Reviewer 9mDw for their acknowledgment of the quality of our writing. Furthermore, we deeply appreciate the positive feedback from Reviewers 9mDw and J9vg regarding the extensive experiments conducted in our study, which convincingly demonstrate the effectiveness of the proposed method. 
Then we provide detailed responses for individual questions separately.\"}", "{\"summary\": \"This paper presents the O3 framework, designed to address continual unlearning requests in LLMs without relying on retained data, a common limitation in existing methods. The O3 framework integrates an orthogonal low-rank adapter (LoRA) for unlearning requests and an Out-Of-Distribution (OOD) detector for measuring input similarity with unlearned data. The orthogonal LoRA prevents interference across multiple unlearning requests, while the OOD detector leverages a novel contrastive entropy loss and a layer-aggregated scoring mechanism to manage unlearning effectiveness dynamically. Extensive experiments demonstrate O3\\u2019s superior balance between unlearning effectiveness and utility preservation, particularly in continuous unlearning scenarios. Compared to state-of-the-art methods, O3 shows promising results in reducing computational costs and maintaining model utility across various tasks, such as question answering and intent classification.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper addresses the LLM unlearning from the continual unlearning perspective.\\nThis unlearning process does not need the retained data.\", \"weaknesses\": \"The proposed LLM unlearn methods LORA and OOD detector does not exactly unlearn the knowledge from the LLMs. They are just like two modules externally mounted outside the LLM and block the input and output of the LLMs related to the unlearn targets.\", \"questions\": \"1. With fine-tuning unlearning methods, why is retained data not acceptable? This directly reflects the paper's motivation.\\n2. this work's novelty seems insufficient, with LORA and OOD blocking the input/output of the LLM.\\n3. 
If the model did not really forget the data distribution in the proposed method, is this method vulnerable to attack methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concern about novelty of O3\", \"comment\": \"> Concern about novelty of O3\\n\\nWe appreciate your advice to clarify the novelty of our work. \\n\\n**Problem Novelty**: One of our key contributions to the field of continual unlearning is being the first to explore the underexplored problem of LLM continual unlearning without retained data. In real-world scenarios, retained data is often unavailable or insufficient to effectively train large models, as discussed in the previous response. Furthermore, unlike previous unlearning methods, our approach systematizes the continual unlearning process by addressing diverse challenges, including the continual unlearning of different domain knowledge, fictitious knowledge, and intents, which could facilitate future research in the continual unlearning area with or without retained data.\", \"our_method_contributions_are_novel_in_two_key_aspects\": \"(1) Architecture Design: The overall framework is specifically designed to perform LLM unlearning without relying on retained data, addressing a critical limitation of existing unlearning approaches. (2) Technical Innovations: Our work introduces effective techniques for OOD detection and continual unlearning using LoRA, ensuring robust and efficient performance.\\n\\n**Architecture Design Novelty**: As mentioned previously, to address the challenges of computational costs, utility performance degradation, and, most importantly, the lack of retained data, we propose the O3 framework.\\n\\nOur framework is the first to offer a solution for handling different distributions with specialized modules, integrating the designed OOD detectors and LoRA modules into the unlearning problem. 
**Owing to the two-branch architecture design, we can leverage OOD to relax the use of retained data, and forward unlearning-irrelevant data to the original LLM without LoRA to prevent utility loss.** This innovative combination achieves superior unlearning performance, particularly in continual unlearning settings, such as in systems dealing with dynamic privacy regulations or evolving user preferences. Furthermore, O3's ability to work without retained data significantly enhances its practicality, especially in sensitive areas like healthcare and finance, where maintaining access to personal or confidential data for utility preservation is not feasible. This feature also extends to scenarios involving specialized tasks with naturally scarce data, such as rare disease diagnosis or niche financial analysis, where retained data availability is inherently limited.\\n\\n\\n\\n**Technical Design Novelty**: As for individual components in our O3 framework, each has significant technical novelty.\\n\\nTo begin with, we noticed that nearly all existing OOD detection works are built upon the classification problem, which is not the mainstream task of language models. Besides, their representation learning and scoring mechanisms rely on semantic category labels that are inaccessible in our problem. Regular contrastive learning is unsuitable for language OOD detection, as it is challenging to achieve semantically equivalent data augmentation for text, and it inevitably relies on supervised information to some degree. Furthermore, token-level self-supervised learning tasks, such as Masked Language Modeling and SimCSE [7], have proven far less effective in OOD detection. To overcome these challenges, we propose a novel contrastive entropy loss to learn text representations specifically designed for OOD detection. **Our approach leverages random masking to generate the first view and creates the second view by feeding the original text into a maintained key encoder. 
By using a layer-wise cosine similarity-based softmax probability to weight the optimization, our method achieves significantly faster convergence.**\\n\\n**To address the inaccuracy of OOD detection when the available ID data is limited or biased, we design a global-local-aware scoring mechanism** that combines the Mahalanobis Distance [8] and maximum instance-wise cosine similarity to characterize ID data. The Mahalanobis Distance provides global awareness by approximating the ID data with a global Gaussian distribution, while the maximum instance-wise cosine similarity ensures local awareness by emphasizing instance-level relationships.\\n\\nFinally, we employ orthogonal regularization to facilitate the effectiveness and efficiency of our O3 framework when conducting continual unlearning. In fact, though applying LoRA can significantly reduce the computation and storage overhead during each unlearning operation, the accumulated LoRAs consume substantial resources as the unlearning requests increase. Therefore, we **enable the use of a single LoRA** rather than multiple LoRAs for continual unlearning and propose the **orthogonal regularization to disentangle every unlearning request in the LoRA parameter space, preventing the interference across requests**.\"}" ] }
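[Editorial note] The global-local-aware scoring idea described in the rebuttal above (combining a global Mahalanobis distance [8] with a local maximum instance-wise cosine similarity) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the single `alpha` combination weight, the pseudo-inverse covariance, and the use of raw feature matrices (rather than the paper's layer-aggregated representations) are all assumptions made for the sketch.

```python
import numpy as np

def ood_score(features_id, features_query, alpha=0.5):
    """Sketch of a global-local-aware OOD score.

    Global term: Mahalanobis distance of each query to a Gaussian
    fit on in-distribution (ID) features. Local term: maximum cosine
    similarity between each query and any individual ID sample.
    Higher score = more likely out-of-distribution.
    """
    mu = features_id.mean(axis=0)
    cov = np.cov(features_id, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability

    # Global awareness: per-query Mahalanobis distance to the ID Gaussian
    diff = features_query - mu
    maha = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

    # Local awareness: max cosine similarity to any single ID sample
    id_norm = features_id / np.linalg.norm(features_id, axis=1, keepdims=True)
    q_norm = features_query / np.linalg.norm(features_query, axis=1, keepdims=True)
    max_cos = (q_norm @ id_norm.T).max(axis=1)

    # Large distance and low similarity both push the score up
    return alpha * maha - (1 - alpha) * max_cos
```

Under this sketch, queries drawn far from the ID cluster receive higher scores than queries drawn from it, which is the behavior the rebuttal attributes to the combined global-local criterion.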
Es4RPNDtmq
Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis
[ "Hyun woo Lee", "Hayoung Choi", "Hyunju Kim" ]
As a neural network's depth increases, it can improve generalization performance. However, training deep networks is challenging due to gradient and signal propagation issues. To address these challenges, extensive theoretical research and various methods have been introduced. Despite these advances, effective weight initialization methods for tanh neural networks remain insufficiently investigated. This paper presents a novel weight initialization method for neural networks with tanh activation function. Based on an analysis of the fixed points of the function $\tanh(ax)$, the proposed method aims to determine values of $a$ that mitigate activation saturation. A series of experiments on various classification datasets and physics-informed neural networks demonstrates that the proposed method outperforms Xavier initialization methods (with or without normalization) in terms of robustness across different network sizes, data efficiency, and convergence speed. Code is available at https://github.com/1HyunwooLee/Tanh-Init.
[ "Weight initialization", "Signal propagation", "Physics informed neural networks" ]
Accept (Poster)
https://openreview.net/pdf?id=Es4RPNDtmq
https://openreview.net/forum?id=Es4RPNDtmq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "znaerC1oow", "zRDEiYIjMf", "xOAFJHpWIQ", "wrOBlNl70c", "wWUMJcY2he", "rfz7jXD91Q", "pt0SjHIR5f", "nekzdQh2qk", "n1mdINm3nN", "k7uUFWKHmg", "jWa8hgwgX0", "j4KspFkRkd", "hkA69yMWz6", "gbebJlaiRs", "cbDCt6VOHu", "cHn4sT0oel", "Y6qd6FRhh3", "XanR2TW70Z", "UkhPAXDkag", "UWmW8jzRdq", "TbUbEf5E7I", "TJpi2kF7yU", "QwIEiteiBF", "Qc50qazs92", "KwsHb6u0cv", "EOLE8TOuCP", "C1ulmNywRQ", "2tNdKu6A84", "06qHtRHL7v" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732403030952, 1729820121208, 1732618126291, 1737524258825, 1732775539561, 1732412878133, 1732417796994, 1732552795794, 1732549070342, 1732541866341, 1732550410559, 1732409600830, 1732602001921, 1732841981711, 1732703828257, 1730213072568, 1732553805614, 1732620370766, 1732400460123, 1732850270321, 1734433394161, 1732736739036, 1732552841126, 1729843842556, 1733203164888, 1732391418406, 1732713795003, 1732615063300, 1730191170857 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_YCAx" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_6yoV" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13420/Area_Chair_W4E6" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_79Su" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_6yoV" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_79Su" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Area_Chair_W4E6" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_kh6z" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Authors" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_6yoV" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_YCAx" ], [ "ICLR.cc/2025/Conference/Submission13420/Reviewer_6yoV" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your valuable comments! In the following section, we will address the weaknesses (W) and questions (Q) mentioned above. **The changes are marked in blue.**\\n\\n**W1** *\\\"The impact depended on exclusively using tanh as an activation function is fundamentally beneficial in PINNs. As the current state of the paper, there is not enough support for this.\\\"*:\", \"a1\": \"Thank you for your insightful suggestion. The tanh activation function has been experimentally shown to perform well in PINNs, which is why tanh neural networks remain widely used [1,2,3,4]. 
In response to the reviewers' comments, we have revised the manuscript as follows.\\n- We have presented the absolute error between the exact solution and the PINN-predicted solution for **different activation functions in Figure 14**. Tanh, sigmoid, swish, and ReLU activation functions were compared, with tanh showing the lowest absolute error. \\n\\n**Q1** *\\\"In sections 4.1 and 4.2, the experiment trains for 20 or 40 epochs. Do networks converge to their best accuracy? What is the difference in accuracy when training for more epochs?\\\"*:\", \"a2\": \"Thank you for your valuable suggestion. In response to the reviewer\\u2019s comment, we conducted the following additional experiment:\\n- We trained models on **MNIST, FMNIST, CIFAR-10, and CIFAR-100 datasets for up to 100 epochs in Figure 4**. For the MNIST and FMNIST datasets, all four initialization methods showed convergence after 40 epochs. Notably, Xavier with normalization showed faster convergence compared to Xavier without normalization. In contrast, for CIFAR-10 and CIFAR-100, all methods exhibited overfitting tendencies. Across all four datasets, the proposed method achieved the highest accuracy.\\n\\n**W2&Q2** *\\\"Experiment section only considers a few data sets or PDEs\\\"*:\", \"a3\": [\"Thank you for your valuable suggestion. In response to the reviewers' comments, additional experiments were conducted, and the manuscript has been revised as follows.\", \"Normalization methods were proposed to address gradient issues in deep neural networks. Therefore, we conducted experiments by applying Batch Normalization or Layer Normalization to Xavier initialization, as shown in **Tables 3 and 4 and Figure 4**. 
The results demonstrate that the proposed method, without normalization, achieves faster convergence, improved data efficiency, and is more robust to variations in network size compared to existing methods.\", \"We have included **CIFAR-100 data in Tables 1, 2, and Figure 12**, added results for **networks with three hidden layers**, and provided PINN results for the **Diffusion and Poisson equations in Table 4**.\", \"We conducted experiments in **Figures 6, 7, 16 and Table 3** to evaluate whether the proposed method enables efficient learning with limited data. The results experimentally demonstrate that the proposed method is more data-efficient compared to other methods.\", \"We conducted additional experiments, shown in **Figure 13 (a)**, to compare it with He initialization and Orthogonal initialization in ReLU neural networks.\", \"Motivated by the improved performance of the proposed method in networks with significant variations in the number of nodes across layers, we further conducted experiments on **autoencoders, as presented in Figures 13 (b) and (c)**. These results demonstrate the applicability of the proposed method in such architectures.\", \"**Q3** *\\\"Can the code used in the experiment can be provided to improve reproducibility?\\\"*:\"], \"a4\": \"Thank you for this valuable suggestion. **Example code** implementing the proposed weight initialization method has been included in the Supplementary Material for reproducibility.\\n\\nWe hope that these explanations address your concerns, but we'd be happy to answer any remaining questions about our method. \\n\\n---\\n[1] Karniadakis, George Em, et al. \\\"Physics-informed machine learning.\\\" Nature Reviews Physics 3.6 (2021): 422-440. \\n\\n[2] Rathore, Pratik, et al. \\\"Challenges in training PINNs: A loss landscape perspective.\\\" ICML 2024. \\n\\n[3] Gnanasambandam, Raghav, et al. 
\\\"Self-scalable tanh (stan): Multi-scale solutions for physics-informed neural networks.\\\" TPAMI (2023).\\n\\n[4] Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. \\\"Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\\\" \\nJournal of Computational Physics 378 (2019): 686-707.\"}", "{\"summary\": \"The paper proposes a new method to initialize weights for FCNNs and PINNs with tanh activation. The paper claims that the proposed weight initialization method will not lead to diminishing activations for very deep networks unlike Xavier weight initialization. The paper claims that the proposed weight initialization is robust to network depth and number of units in hidden layers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality\\nThe paper presents a novel weight initialization method specifically designed for tanh-based neural networks, addressing an understudied area in neural network initialization. This approach is distinct in its use of fixed-point analysis to prevent activation saturation and improve training robustness across network sizes, particularly in the context of Physics-Informed Neural Networks (PINNs). By emphasizing robustness and performance consistency in both traditional classification tasks and PINNs, the paper makes a valuable contribution to the field of weight initialization. The originality is strong, given the lack of prior research focusing on tanh-based initialization methods.\\n\\nQuality\\nThe paper demonstrates high quality in both theoretical and experimental aspects. The method is grounded in rigorous mathematical analysis, leveraging fixed-point theory to derive conditions that ensure stable activation propagation. 
The provided lemmas, proofs, and propositions add credibility and depth to the approach.\\nExperiments are well-designed and span various network configurations, datasets (MNIST, Fashion MNIST, CIFAR-10), and applications (PINNs for solving differential equations). The results consistently show that the proposed method outperforms Xavier initialization, particularly in deeper networks and varying network sizes.\", \"clarity\": \"Overall the paper is very well written, with some exceptions mentioned in the weakness section. All the sections in the paper are laid out clearly. The notations are consistent across the paper.\", \"weaknesses\": \"Significance:\\nThe paper compares their proposed weight initialization with Xavier weight initialization for FCNN with tanh activation. Xavier is known to show diminishing gradients and activations problem for deeper networks, but this is solved by using layer normalization. I am therefore considering this work not significant because the problem that the authors are trying to solve does not exist for Xavier + Layer norm and the authors did not do any comparative analysis with and without layer norm.\", \"other_major_issue\": \"1. In equation 2, the paper claims that a_i^(k+1) follows normal distribution with unit mean. But when number of neurons in layer l-1 (N_(l-1)) is greater than number of neurons in layer l (N_l), then the mean will be greater than 1. When N_(l-1) = 2 * N_l, the mean will be 2. This is not clearly discussed in the paper. If the mean is > 1, then that leads to tanh always saturated.\", \"questions\": \"1. Layer norm is added to handle the diminishing activation problem. Any reason why you did not compare the performance of the proposed approach with Xavier weight initialization with Layer norm?\\n\\n2. In equation 2, the paper claims that a_i^(k+1) follows normal distribution with unit mean. 
But when number of neurons in layer l-1 (N_(l-1)) is greater than number of neurons in layer l (N_l), then the mean can be greater than 1. When N_(l-1) = 2 * N_l, the mean will be 2. This is not clearly discussed in the paper. If the mean is > 1, then that can lead to tanh saturation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 79su,\\n\\nThank you for your insightful and encouraging feedback. We sincerely appreciate the time and effort you dedicated to reviewing our work and for providing valuable suggestions that greatly contributed to improving the quality of our research. We are also deeply grateful for your recognition of our efforts and for revising your rating to an 8.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"The reviewer acknowledges the data efficiency in PINNs adds to the soundness of the paper.\"}", "{\"comment\": \"Thank you for your valuable comments! In the following section, we will address the weaknesses (W) and questions (Q) mentioned above. **The changes are marked in blue.**\\n\\n**W1&Q2** *\\\"In equation 2, the paper claims that $a_i^{k+1}$ follows normal distribution with unit mean. But when number of neurons in layer l-1 (N_(l-1)) is greater than number of neurons in layer l (N_l), then the mean will be greater than 1. When N_(l-1) = 2 * N_l, the mean will be 2. This is not clearly discussed in the paper. If the mean is > 1, then that leads to tanh always saturated.\\\"*:\", \"a1\": \"Thank you for your valuable comment. We propose a weight matrix defined as $\\\\mathbf{W}^{\\\\ell} = \\\\mathbf{D}^{\\\\ell} + \\\\mathbf{Z}^{\\\\ell} \\\\in \\\\mathbb{R}^{N_{\\\\ell} \\\\times N_{\\\\ell-1}}$, where $\\\\mathbf{D_{i,j}}^{\\\\ell} = 1$ if $i \\\\equiv j \\\\pmod{N_{\\\\ell-1}}$, and 0 otherwise. 
If $\\\\mathbf{D_{i,j}}^{\\\\ell} = 1$ when $i \\\\equiv j \\\\pmod{N_{\\\\ell}}$, and 0 otherwise, the mean in the mentioned case could become 2. However, in the proposed method, all off-diagonal elements of $\\\\mathbf{D}^{\\\\ell}$ are 0 when $N_{\\\\ell-1} > N_{\\\\ell}$, ensuring that the mean remains 1. We have revised the manuscript as follows to improve clarity:\\n- **Inline Headings were added** to each paragraph for better organization. \\n- In Section 3.2, we included a **remark outlining two conditions** that the proposed weight initialization is designed to satisfy. \\n\\n**Q1** *\\\"Layer norm is added to handle the diminishing activation problem. Any reason why you did not compare the performance of the proposed approach with Xavier weight initialization with Layer norm?\\\"*:\", \"a2\": [\"Thank you for your valuable suggestion. To the best of our knowledge, in neural networks using the tanh activation function, normalization methods are less effective due to the inherent properties of tanh. Tanh suffers from gradient saturation for large inputs, where gradients approach zero, and normalization cannot effectively mitigate this issue. Additionally, normalization methods generally introduce a computational overhead of approximately 30% and require additional time for tuning, such as deciding how frequently to apply them across layers. In response to the reviewers' comments, we conducted experiments to compare the proposed method with Xavier initialization, both with and without normalization methods (Batch Normalization and Layer Normalization), and revised the paper as follows:\", \"**In Figure 4**, we validated Xavier, Xavier with BN, Xavier with LN, and the proposed method on MNIST, FMNIST, CIFAR-10, and CIFAR-100 datasets. The proposed method demonstrated the fastest convergence and the highest accuracy across all datasets.\", \"**In Table 3**, we evaluated the **data efficiency** of the four methods on MNIST and FMNIST. 
The results show that the proposed method achieved higher accuracy even with limited data.\", \"**In Table 4**, we tested the four methods on **the Allen-Cahn, Burgers, Diffusion, and Poisson equations** across various network sizes. The proposed method exhibited the greatest robustness to network size.\", \"We hope that these explanations address your concerns, but we'd be happy to answer any remaining questions about our method.\"]}", "{\"title\": \"Summary of Revisions\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe would like to thank all the reviewers for taking the time to review our work and for providing valuable feedback. \\nWe appreciate the recognition from reviewers of clear and good presentation **(Reviewers 79Su, 6yoV, kh6z, YCAx)**, improved performance across various tasks **(Reviewers 79Su, 6yoV, YCAx)**, the theoretical foundation of the proposed method **(Reviewers 6yoV, kh6z, YCAx)**, and the novelty of our method **(Reviewers 79Su, kh6z, YCAx)**.\\n***\\nThe latest revision of our paper has been uploaded, addressing all comments and queries raised by the reviewers. Edits in the PDF have been highlighted in red. Below, we provide a summary of the changes made to our work.\\n\\n# Writing \\n**Improvements** Thanks for the suggestions of Reviewers YCAx, and kh6z\\n- We have made **minor edits** for clarity and space efficiency.\\n- We have added **additional explanations in Section 3.1** for further clarification. 
\\n- We have added **inline headings** to each paragraph for improved organization.\\n\\n# Experiments \\n**Normalization Methods** Thanks for the suggestions of Reviewer YCAx\\n- We have added experiments comparing **Xavier with normalization** and the proposed method on classification datasets in **Figure 4 and Table 3**.\\n- We have added an experiment comparing **Xavier with normalization** and the proposed method on PDEs in **Table 4**.\\n\\n**Dataset Efficiency** Thanks for the suggestions of Reviewer 79Su.\\n- We have added experiments verifying **data efficiency** on classification datasets in **Table 3**.\\n- We have added experiments verifying **data efficiency** on PDEs in **Figures 6, 7, and 16**.\\n\\n**Datasets and PDEs** Thanks for the suggestions of Reviewers 6yoV, kh6z, and 79Su.\\n- We have added **CIFAR-100 data** to the existing experiments in **Tables 1 and 2**.\\n- We have added **the diffusion equation and Poisson equation** to the existing experiments in **Table 4**.\\n- We have added experiments on the **activation value distribution** in deeper layers in **Figures 3 and 8**. \\n\\n**Supplementary Experiments** Thanks for the suggestions of Reviewers 79Su, YCAx, 6yoV, and kh6z.\\n- We have added experiments on classification datasets between the proposed method for tanh and **He/orthogonal initialization for ReLU in Figure 13**. \\n- We have added an experiment on the **autoencoder in Figure 13 (b) and (c)**. \\n- We have added a performance comparison experiment of **PINNs based on various activation functions in Figure 10**. 
\\n- We have added an experiment on **PINNs with Swish** activation function in **Figure 18**.\\n\\nWe have included all experimental results in our revised paper.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"comment\": \"**Q1&Q2** *\\\"What is \\ud835\\udefc \\\"*:\", \"a6\": [\"We propose a weight initialization method that satisfies the following condition during the initial forward pass: it ensures that the distribution of activation values in deeper layers is approximately normal. As shown in Figures 3 and 8, when $\\\\sigma_z = 0.015$, the activation value distribution in the 1000th layer is observed to be approximately a normal distribution. In this experiment, all hidden layers were set to have 32 nodes, satisfying $0.015 = \\\\alpha / \\\\sqrt{32}$. The value of \\ud835\\udefc is approximately 0.085. In response to the reviewers' comments, we have revised the manuscript.\", \"We have added additional **details about \\ud835\\udefc in the final paragraph of Section 3.3**.\", \"We have specified the conditions that the initial weight matrix should satisfy in the **remark of Section 3.2**.\", \"We have added **experiments on the changes in activation distribution with respect to $\\\\sigma_z$ in Figures 3 and 8**.\", \"**Q3** *\\\"In Figure 3 (c) and (d), the proposed method seems to decrease after 6 epochs. Although the accuracy curve can rapidly reach the peak (faster than Xavier), the robustness of this method also should be discussed. For example, the initialization method can first provide prior knowledge to neural networks, but if it can keep the stability of training or not. Or is it the reason for the high \\ud835\\udefc? \\\"*:\"], \"a7\": \"Thank you for this valuable suggestion. In response to the reviewers' comments, additional experiments were conducted:\\n- We **trained models on the MNIST, FMNIST, CIFAR-10, and CIFAR-100 datasets for up to 100 epochs in Figure 4**. 
For the MNIST and FMNIST datasets, all four initialization methods converged after 40 epochs. However, for CIFAR-10 and CIFAR-100, all four initialization methods reached a peak in accuracy at around 10 epochs, followed by a rapid decline. The proposed method, as shown in Figures 4 (c) and (d), reached the peak faster and exhibited a slower decline in accuracy compared to the other methods. These results demonstrate that the proposed method achieves both rapid convergence and improved training stability compared to existing methods.\\n- The proposed value of $\\\\alpha=0.085$, as shown in **Figures 3 and 8, prevents activation values from saturating even at the 1000th layer** and maintains a consistent scale for activation values, as illustrated in Figure 1. This ensures that signals are not lost in deeper layers.\\n\\n**Q4** \\\"In Appendix A.1, the authors discussed different conditions. When x= 0, whether the vanishment problem will abscond. Please highlight the strategy on how this method can process it.\\\":\", \"a8\": \"Thank you for pointing out the need for further clarification of the strategy in Appendix A.1. In Appendix A.1, the proof approaches two cases:\\n1. When $\\\\alpha \\\\leq 1$, the fixed point $x^*=0$ is unique, and the activation values tend to shrink toward 0 as the network depth increases. This highlights the challenge of the vanishing activation problem.\\n2. When $\\\\alpha > 1$, two fixed points $x^*=\\\\pm\\\\xi_a$ emerge, which prevent the activations from collapsing to zero. \\n\\nThe key strategy of our method lies in ensuring that the initialization keeps $a$ close to 1 across the network, thus avoiding the vanishing of activations. If $x_i=0$ at initialization and all other elements of vector $x$ are non-zero, the proposed weight initialization ensures that subsequent activations will move away from zero. 
In response to the reviewers' comments, we have revised the manuscript as follows.\\n- We have added **additional explanations in Section 3.1** for further clarification.\\n\\nWe hope that these explanations address your concerns, but we'd be happy to answer any remaining questions about our method.\"}", "{\"title\": \"Reminder for Reviewer 79Su\", \"comment\": \"Dear Reviewer 79su,\\n\\nWe hope that our responses could adequately address your concerns. \\nAs the discussion phase deadline approaches, we warmly welcome further discussion regarding any additional concerns that you may have, and we sincerely hope you can reconsider the rating accordingly.\\n\\nThank you for the time and appreciation that you have dedicated to our work. \\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"comment\": \"Dear reviewers,\\n\\nA reminder that **November, 26** is the last day to interact with the authors, before the private discussion with the area chairs. At the very least, please acknowledge having read the rebuttal (if present). If the rebuttal was satisfying, please improve your score accordingly. Finally, if you have concerns that might be solved in time, this is the last chance before moving on to the next phase.\\n\\nThanks,\\nThe AC\"}", "{\"title\": \"Reminder for Reviewer 6yoV\", \"comment\": \"Dear Reviewer 6yoV,\\n\\nWe hope that our responses could adequately address your concerns. As the discussion phase deadline approaches, we warmly welcome further discussion regarding any additional concerns that you may have, and we sincerely hope you can reconsider the rating accordingly.\\n\\nThank you for the time and appreciation that you have dedicated to our work.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"comment\": \"Thank you for your valuable comments! In the following section, we will address the weaknesses (W) and questions (Q) mentioned above. 
**The changes are marked in blue.**\\n\\n**W1** *\\\"The universality of this method should be improved. For example, more experiments on the feasibility of other neural networks except FNN and PINN.\\\"*:\", \"a1\": \"Thank you for your valuable suggestion. We derived the initialization by simplifying the process of signal propagation in feedforward neural networks, making it particularly effective for architectures using tanh FFNNs. One such example is Physics-Informed Neural Networks (PINNs). In response to the reviewers' comments, we have added new experiments as follows:\\n- We have added **experiments on autoencoders in Figure 13 (b) and (c)** to compare the performance of four methods: (1) tanh activation with Xavier initialization, (2) tanh activation with the proposed initialization, (3) ReLU activation with He initialization and Batch Normalization (BN), and (4) ReLU activation with orthogonal initialization. The results show that the proposed method achieves the fastest convergence and the lowest validation loss among all methods.\\n\\n**W2** *\\\"The details of hyperparameters should be mentioned, such as what is the threshold of FFNN, which training strategy (supervised or unsupervised) of FFNN is used?\\\"*:\", \"a2\": \"Thank you for your comment. In response to the reviewers' comments, we have revised the manuscript as follows:\\n- We have specified the experimental settings as inline headings in Section 4 and revised them in enhanced detail.\\n\\n**W3** *\\\"FFNN is specifically designed to visualize the trained features, which also should be discussed.\\\"*\", \"a3\": \"Thank you for your comment. We observed changes in the weight matrix during the training of the FFNN. Additionally, we analyzed the changes in the rank, eigenvalues and eigenvectors of this matrix, which provided valuable insights into the training process of the FFNN. 
However, as the primary focus of this paper is on effectively propagating signals through deep layers, we chose not to visualize the weight matrix to maintain the coherence of the manuscript.\\n\\n**W4** *\\\"The consistency of results should be guaranteed. For example, in Figure 3 (c) and (d), there is a different trend (Xavier tends to equal the proposed method), which also should be discussed. In case that after 20 epochs, the performance would be totally different to the presented result.\\\"*:\", \"a4\": \"Thank you for your valuable suggestion. In response to the reviewers' comments, we have added new experiments:\\n- We have added experiments with **100 epochs of training on MNIST, FMNIST, CIFAR-10, and CIFAR-100 in Figure 4**.\\n The proposed method achieved the highest accuracy and the fastest convergence across all four datasets.\\n\\n**W5** *\\\" The writing and structure should be revised again\\\"*:\", \"a5\": [\"Thank you for your comment. In response to the reviewers' comments, we have revised the manuscript as follows:\", \"We have made minor edits for **clarity** and **space efficiency**.\", \"We have added **inline headings** to each paragraph for improved organization.\", \"We have added **explanations in Section 3.1** to enhance understanding.\", \"We have specified the conditions that the initial weight matrix should satisfy in the **remark of Section 3.2**.\"]}", "{\"comment\": \"Thank you for submitting your revised paper. I appreciate the effort you've put into addressing the comments and improving the results. The updated work demonstrates significant progress, and I am impressed by the improvements. Congratulations on your efforts. Based on this revision, I am revising my rating to an 8. 
Keep up the great work, and I look forward to seeing how this research evolves.\"}", "{\"comment\": \"We sincerely thank you for your thorough reviews and for your appreciation of our work.\"}", "{\"title\": \"Thank you\", \"comment\": \"I thank you for all your work in writing up the rebuttal and do apologize for the late response on my side. I am reading through the rebuttal and will provide some feedback soon.\"}", "{\"summary\": \"The paper introduces a novel weight initialization technique specifically designed for neural networks using the tanh activation function. This technique is evaluated against the well-known Xavier initialization method using benchmark datasets. The experimental results demonstrate that the proposed initialization method enhances the convergence speed of Physics-Informed Neural Networks (PINNs) utilizing the tanh function, showing greater robustness to variations in network size. The findings indicate that the new initialization technique outperforms Xavier initialization in solving various problems related to Partial Differential Equations (PDEs).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The strength of this paper lies in the development of a novel weight initialization technique that facilitates faster convergence and enhances performance in physics-informed neural networks (PINNs) utilizing the tanh activation function.\", \"weaknesses\": \"1. The comparison of the proposed weight initialization technique solely with Xavier is insufficient; it should also be experimentally evaluated against other state-of-the-art weight initialization methods.\\n2. Using tanh activation function in entire neural network is not good practice that it has the drawback of the vanishing gradients for the very high and very low values of x.\\n3. The formulation is more complex than standard methods, which could complicate implementation as shown in Equation (1). \\n4. 
The optimal value of \\ud835\\udefc can be highly context-dependent, varying across different architectures, datasets, and tasks, which makes it less universally applicable. Additionally, the choice of \\ud835\\udefc can interact with other hyperparameters, such as learning rate and batch size, complicating the overall tuning process during backpropagation, as described in Equation (2).\\n5. In Section 4.1, the evaluation process utilizes three datasets\\u2014MNIST, FMNIST, and CIFAR-10\\u2014employing the tanh activation function in every layer. As shown in Table 2, as the number of hidden layers increases, loss gradually increases, which is indicative of overfitting. It would be more effective to use the proposed weight initialization in conjunction with state-of-the-art architectures for training deep neural networks.\", \"questions\": \"See above in weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder for Review YCAx\", \"comment\": \"Dear Reviewer YCAx,\\n\\nWe hope that our responses could adequately address your concerns. As the discussion phase deadline approaches, we warmly welcome further discussion regarding any additional concerns that you may have, and we sincerely hope you can reconsider the rating accordingly.\\n\\nThank you for the time and appreciation that you have dedicated to our work.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"comment\": \"**Regarding W1&Q2**\\n\\n**1.** *\\\"I don't see where this is mentioned in the paper \\\"all off-diagonal elements of $D_{i,j}^l$ are 0, ensuring that the mean remains 1.\\\"*:\", \"a1\": \"Thank you for your comment. In the revised paper (lines 255--256), the proposed weight initialization method defines $D_{i,j}^{\\\\ell} = 1$ if $i \\\\equiv j \\\\pmod{N_{\\\\ell -1}}$, and 0 otherwise. 
From this definition of $D^{\\\\ell}$, it can be inferred that when $N_{\\\\ell-1} > N_{\\\\ell}$, all off-diagonal elements of $D^{\\\\ell}$ are 0.\\n\\nTo clarify, the condition $i \\\\equiv j \\\\pmod{N_{\\\\ell-1}}$ implies that $j = i + k \\\\cdot N_{\\\\ell-1}$, where $k \\\\in \\\\mathbb{Z}$. Additionally, $j$ must lie within the valid range of indices $1 \\\\leq j \\\\leq N_{\\\\ell-1}$. This restricts the possible values of $j$ for a given $i$. When $N_{\\\\ell-1} > N_{\\\\ell}$, the only valid $j$ that satisfies $i \\\\equiv j \\\\pmod{N_{\\\\ell-1}}$ and lies within the range $1 \\\\leq j \\\\leq N_{\\\\ell-1}$ is $j = i$. All other values of $j \\\\neq i$ fall outside the valid range due to the modular condition and the size constraints of the indices. Therefore, under the condition $N_{\\\\ell-1} > N_{\\\\ell}$, all off-diagonal elements of $\\\\mathbf{D}^{\\\\ell}$ are zero. In response to the reviewers' comments, we have revised the manuscript for clarification, as follows, instead of providing a formal proof. \\n\\n- We have **added examples** of $D^{\\\\ell}$ for the cases $N_{\\\\ell} < N_{\\\\ell-1}$, $N_{\\\\ell} = N_{\\\\ell-1}$, and $N_{\\\\ell} > N_{\\\\ell-1}$ in **Figure 17**.\\n\\n**2.** *\\\"Also please clarify what do you mean by diagonal in a rectangular matrix.\\\"*:\", \"a2\": \"We apologize for any confusion caused by our previous response. In a rectangular matrix, the diagonal specifically refers to the elements where the row and column indices are equal.\\n\\n**3.** *\\\"What about the case where $N_{l-1}<N_{l}$, the mean will be less than 1 even when off diagonal elements are zero.\\n\\\"*:\", \"a3\": \"Thank you for your valuable comment. To address the case where $N_{\\\\ell-1} < N_{\\\\ell}$, let us provide an example. Assume $D^{\\\\ell}$ is a $4 \\\\times 2$ matrix, and $Z^{\\\\ell}$ is added such that all off-diagonal elements remain 0. 
The resulting $W^{\\\\ell}$ is as follows:\\n\\n[[1+z_1, 0],\\n\\n[0, 1+z_2],\\n\\n[1+z_3, 0],\\n\\n[0, 1+z_4]]\\n\\nwhere $z_1, \\\\dots, z_4$ are drawn from a normal distribution with mean 0 and variance $\\\\sigma_z^2$, by the definition of the proposed method. For an input vector $x^0 = (x^0_1, x^0_2)$, the output becomes $x^1_1 = \\\\tanh((1+z_1) \\\\cdot x^0_1)$. Therefore, even in this case, the mean remains 1 because the added noise $z_i$ has a mean of 0. The same argument also applies to matrices with $N_{l-1} > N_l$. We have revised the manuscript for clarification as follows: \\n\\n- We have **added examples** of $D^{\\\\ell}$ for the cases $N_{\\\\ell} < N_{\\\\ell-1}$, $N_{\\\\ell} = N_{\\\\ell-1}$, and $N_{\\\\ell} > N_{\\\\ell-1}$ in **Figure 17**.\\n\\n**Regarding Normalization**\\n\\nWe sincerely appreciate your valuable suggestions regarding normalization, which have significantly improved our manuscript. We hope the **experiments** provided address your concerns effectively. \\n\\nWe warmly welcome any further discussion or additional feedback you may have and kindly hope you might reconsider your evaluation in light of these improvements. Thank you again for your thoughtful review.\"}
The results show that the proposed method exhibits less overfitting in deep networks for feature-rich datasets such as CIFAR-10 and CIFAR-100. \\n- We conducted **experiments on autoencoders with Batch Normalization and Dropout applied, as shown in Figure 13 (b) and (c)**. The proposed method demonstrated lower loss compared to other approaches.\\n\\nWe hope that these explanations address your concerns, but we'd be happy to answer any remaining questions about our method. \\n***\\n[1] Karniadakis, George Em, et al. \\\"Physics-informed machine learning.\\\" Nature Reviews Physics 3.6 (2021): 422-440. \\n\\n[2] Rathore, Pratik, et al. \\\"Challenges in training PINNs: A loss landscape perspective.\\\" ICML 2024. \\n\\n[3] Gnanasambandam, Raghav, et al. \\\"Self-scalable tanh (stan): Multi-scale solutions for physics-informed neural networks.\\\" TPAMI (2023).\\n\\n[4] Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. \\\"Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\\\" \\nJournal of Computational Physics 378 (2019): 686-707.\\n\\n[5] Lu, Lu, et al. \\\"Dying relu and initialization: Theory and numerical examples.\\\" arXiv preprint arXiv:1903.06733 (2019).\"}", "{\"title\": \"Kind Reminder for Reviewer kh6z\", \"comment\": \"Dear reviewer kh6z\\n\\nThe extended discussion closes in a few days.\\nWe've tried to address all your concerns with new results, \\nclarifications and an updated manuscript. \\nPlease let us know if you have any remaining concerns. \\nWe look forward to a productive discussion, and we sincerely hope you can reconsider the rating accordingly.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"metareview\": \"The paper proposes a novel initialization method for tanh neural networks, based on a fixed-point analysis of the layer.\\n\\nThe reviews are mixed, ranging from marginal rejection to strong acceptance. 
The reviewers had several technical questions (see below for a detail), but these were addressed in the rebuttal. They were also concerned about some missing baselines (e.g., other initialization techniques, layer normalization), which the authors added in the rebuttal. The only remaining concern is the limited scope of the paper, which focuses on a narrow type of neural network.\\n\\nOverall, the paper is technically correct; the authors provided a very significant rebuttal which was appreciated by all reviewers who interacted during the rebuttal phase. In addition, they argue correctly that tanh models are still important in several sub-fields, such as PINNs. As a result, I lean towards acceptance.\", \"additional_comments_on_reviewer_discussion\": [\"**Reviewer YCAx** had concerns on the mathematical analysis, and on the lack of experiments with layer normalization. These were addressed in the rebuttal.\", \"**Reviewer kh6z** had some questions on a few points (e.g., hyperparameters). However, the questions were rather vague, and the reviewer did not answer to my requests for clarification. They also ignored the author's rebuttal. As a result, I ignored the review in my final evaluation.\", \"**Reviewer 79Su** was originally negative, with concerns on the scope of the work (which is limited to tanh models), hyper-parameters, and the choice of baselines. However, the authors provided a significant rebuttal, and the reviewer now fully recommend acceptance. This was the most significant review for my final evaluation.\", \"**Reviewer 6yoV** had some minor concerns that were addressed in the rebuttal.\"]}", "{\"comment\": \"Thank you for your valuable feedback! 
We sincerely appreciate the time and effort you dedicated to reviewing our work.\\n\\n**W1** *\\\"The proposed method has no significant improvement in classification tasks.\\\"*\", \"a1\": \"The proposed initialization method is designed to effectively propagate input signals to deeper layers during the initial forward pass. Effective signal propagation ensures that different input signals remain distinct in deeper layers. While the performance difference compared to existing initialization methods may not be significant in classification tasks, the proposed method demonstrates **robustness to network size (Table 1, 2, and 4)** and **high data efficiency (Table 3, Figures 6, 7, and 16)**. We particularly highlight the data efficiency of our method as a notable advantage.\\n\\n**Q1** *Other activation functions.*\", \"a2\": \"Thank you for your insightful comment! We acknowledge that PINNs use a variety of activation functions depending on the specific PDE problem, including:\\n- PINNs with tanh activation function [1,2,3,4,5]\\n- PINNs with swish activation function [6,7]\\n- PINNs with locally adaptive activation functions [8]\\n- PINNs with self-scalable tanh activation functions [9]\\n\\nDespite the diversity of activation functions, all the references above employ **Xavier initialization**. It is well-known that the effectiveness of an initialization method depends on the activation function [10, 11]. Xavier initialization was originally designed for networks using tanh activation function but is often applied to networks with other activation functions. The proposed method was also designed for tanh networks, with the hypothesis that an initialization tailored for tanh would also perform well for other smooth activation functions. However, this remains a hypothesis. 
\\nUnlike Xavier initialization, which is theoretically designed based on maintaining linearity at $x=0$ for activation functions, the proposed method is grounded in the fixed points of the tanh function, making it more dependent on the characteristics of tanh activation. \\n\\nIn response to the reviewer's comment, we have revised the manuscript as follows:\\n- We added experiments in **Figure 14 involving swish, elu and diffusion equation**.\\n- We added experimental results in **Figure 18**, comparing Xavier initialization and the proposed method for **PINNs with swish**.\\n\\nWe hope that this response addresses your concerns, but we'd be happy to answer any remaining questions about our method. Thank you again for your thoughtful review.\\n\\n***\\n[1]Jin, Xiaowei, et al. \\\"NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations.\\\" \\nJournal of Computational Physics 426 (2021): 109951.\\n\\n[2] Rathore, Pratik, et al. \\\"Challenges in training PINNs: A loss landscape perspective.\\\" arXiv preprint arXiv:2402.01868 (2024). \\n\\n[3] Son, Hwijae, Sung Woong Cho, and Hyung Ju Hwang. \\\"Enhanced physics-informed neural networks with augmented Lagrangian relaxation \\nmethod (AL-PINNs).\\\" Neurocomputing 548 (2023): 126424.\\n\\n[4] Yao, Jiachen, et al. \\\"Multiadam: Parameter-wise scale-invariant optimizer for multiscale training of physics-informed neural networks.\\\" \\nInternational Conference on Machine Learning. PMLR, 2023.\\n\\n[5] Song, Yanjie, et al. \\\"Loss-attentional physics-informed neural networks.\\\" Journal of Computational Physics 501 (2024): 112781.\\n\\n[6] Wang, Hongping, Yi Liu, and Shizhao Wang. \\\"Dense velocity reconstruction from particle image \\nvelocimetry/particle tracking velocimetry using a physics-informed neural network.\\\" Physics of fluids 34.1 (2022).\\n\\n[7] Wang, Sifan, et al. 
\\\"PirateNets: Physics-informed Deep Learning with Residual Adaptive Networks.\\\" arXiv preprint arXiv:2402.00326 (2024).\\n\\n[8] Cai, Shengze, et al. \\\"Physics-informed neural networks for heat transfer problems.\\\" Journal of Heat Transfer 143.6 (2021): 060801.\\n\\n[9] Gnanasambandam, Raghav, et al. \\\"Self-scalable tanh (stan): Multi-scale solutions for physics-informed neural networks.\\\" \\nIEEE Transactions on Pattern Analysis and Machine Intelligence (2023).\\n\\n\\n[10] He, Kaiming, et al. \\\"Deep residual learning for image recognition.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\\n\\n[11] Glorot, Xavier, and Yoshua Bengio. \\\"Understanding the difficulty of training deep feedforward neural networks.\\\" Proceedings of the thirteenth international \\nconference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings, 2010.\"}", "{\"title\": \"Reminder for Review kh6z\", \"comment\": \"Dear Reviewer kh6z,\\n\\nWe hope that our responses could adequately address your concerns. As the discussion phase deadline approaches, we warmly welcome further discussion regarding any additional concerns that you may have, and we sincerely hope you can reconsider the rating accordingly.\\n\\nThank you for the time and appreciation that you have dedicated to our work.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"summary\": \"Weight initialisation is an old topic, and most studies have verified that weight initialization methods can improve the performance of neural networks. However, because that neural network\\u2019s depth increasing rapidly, most neural networks, especially Feedforward Neural Networks (FFNNs) should face the gradient vanishment problem. In this article, authors proposed one weight initialisation method for FFNNs and Physics-Informed Neural Networks (PINNs). 
Based on an analysis of the fixed points of the function tanh(ax), this method determines values of $a$ that prevent the saturation of activations during the training progress. In terms of robustness, this method presents a stronger and more efficient performance. In the experiment, verified on MNIST, Fashion MNIST, and CIFAR10 datasets, this method also shows acceptable results compared to the Xavier.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. this article proposes one novel weight initialization method for the FFNN and PINN.\\n2. the authors prove that the activation values cannot vanish when increasing the depth of the neural network by using a fixed-point analysis.\", \"weaknesses\": \"1. The universality of this method should be improved. For example, more experiments on the feasibility of other neural networks except FNN and PINN.\\n2. The details of hyperparameters should be mentioned, such as what is the $Threshold$ of FFNN, which training strategy (supervised or unsupervised) of FFNN is used? \\n3. FFNN is specifically designed to visualize the trained features, which also should be discussed.\\n4. The consistency of results should be guaranteed. For example, in Figure 3 (c) and (d), there is a different trend (Xavier tends to equal \\nthe proposed method), which also should be discussed. In case that after 20 epochs, the performance would be totally different to the presented result.\", \"questions\": \"The novelty of this work is strong, and the topic sounds interesting. However, the writing and structure should be revised again. There are some questions that the authors should be concerned about.\\n\\n1. In Eq. 2, $\\\\sigma_{z}$ is set to $\\\\alpha/\\\\sqrt{N^{l}-1}$ and $\\\\alpha = 0.085$. From Figure 2, we can find that the optimal value of $\\\\alpha$ is $0.085$. Is there any theoretical reason why $\\\\alpha$ should be $0.085$. Or should we manually try the value accordingly?\\n\\n2. 
Additionally, please scribe what is $\\\\alpha$. There is no definition of $\\\\alpha$.\\n\\n3. In Figure 3 (c) and (d), the proposed method seems to decrease after 6 epochs. Although the accuracy curve can rapidly reach the peak (faster than Xavier), the robustness of this method also should be discussed. For example, the initialization method can first provide prior knowledge to neural networks, but if it can keep the stability of training or not. Or is it the reason for the high $\\\\alpha$?\\n\\n4. In Appendix A.1, the authors discussed different conditions. When $x = 0$, whether the vanishment problem will abscond. Please highlight the strategy on how this method can process it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Reminder] Could you kindly verify if the provided clarification addresses your concerns?\", \"comment\": \"Dear Reviewer kh6z,\\n\\nWe believe we have carefully and comprehensively addressed all your concerns and questions.\\nAs the discussion period is set to close in less than 7 hours, we would be grateful for any additional suggestions or specific points to further enhance our manuscript.\\n\\nWe sincerely hope you might reconsider your score or share further insights to help us strengthen our work.\\n\\nThank you once again for your support and thoughtful guidance throughout this process.\\nWe deeply value your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors of submission 13420\"}", "{\"comment\": \"Thank you for your valuable comments! In the following section, we will address the weaknesses (W) and questions (Q) mentioned above. 
**The changes are marked in blue.**\\n\\n**W1** *\\\"The comparison of the proposed weight initialization technique solely with Xavier is insufficient; it should also be experimentally evaluated against other state-of-the-art weight initialization methods.\\\"*:\", \"a1\": \"Thank you for this valuable suggestion. Recently, tanh neural networks have gained attention due to their use in PINNs [1]. Our goal is to propose a new initialization method for tanh neural networks that is robust to network size and improves data efficiency. Since tanh neural networks typically use Xavier initialization [1,2,3], we compared our method against it. Based on the reviewers' suggestions, we expanded our comparisons to include more methods.\\n- We added **experiments using Xavier initialization with normalization methods** (Batch Normalization and Layer Normalization) in **Figure 4, Table 3, and Table 4**. The results demonstrate that the proposed method achieves faster convergence and improved robustness to network size compared to Xavier initialization with normalization. \\n- To the best of our knowledge, Xavier initialization is the most commonly used and effective initialization for tanh neural networks. Therefore, we conducted **additional experiments, shown in Figure 13**, to compare it with He initialization and Orthogonal initialization in ReLU neural networks. \\n\\n**W2** *\\\"Using tanh activation function in entire neural network is not good practice that it has the drawback of the vanishing gradients for the very high and very low values of x.\\\"*:\", \"a2\": \"Thank you for your comment. Tanh activation is known to have higher computational complexity and gradient issues. 
However, it has been experimentally shown that tanh outperforms ReLU and other activation functions in Physics-Informed Neural Networks (PINNs) [1,4].\\n- We **conducted additional experiments (Figure 13) comparing the performance of ReLU activation with He or Orthogonal initialization** against tanh neural networks with the proposed initialization. Despite the known issues with tanh activation, the results demonstrate that the proposed initialization enables tanh networks to achieve better performance. \\n- We have added an **experiment in Figure 14 to evaluate the absolute error of PINNs based on different activation functions**. Based on the results of this experiment, we emphasize the importance of the tanh activation function in PINNs.\\n\\n**W3** *\\\"The formulation is more complex than standard methods, which could complicate implementation as shown in Equation (1).\\\"*:\", \"a3\": \"Thank you for your comment. In Section 3.2, under Proposed Weight Initialization, the weight matrix is described as the sum of a matrix *D*, consisting of ones and zeros, and a noise matrix *Z*. While slightly more complex than Xavier initialization, it is expected to be straightforward to apply.\\n- We have enhanced **the clarity of the Proposed Weight Initialization in Section 3.2**. \\n- **Example code** implementing the proposed weight initialization method has been included in the Supplementary Material for reference.\\n\\n**W4** *\\\"The optimal value of \\ud835\\udefc can be highly context-dependent, varying across different architectures, datasets, and tasks, which makes it less universally applicable. Additionally, the choice of \\ud835\\udefc can interact with other hyperparameters, such as learning rate and batch size, complicating the overall tuning process during backpropagation, as described in Equation (2).\\\"*:\", \"a4\": [\"Thank you for your comment. 
Existing methods, such as Xavier initialization, He initialization, and Randomized Asymmetric Initialization [5], consider only the number of nodes in the hidden layers, without accounting for factors like datasets, learning rates, or batch sizes. In response to the reviewers\\u2019 comments, we have **added the following experiments**:\", \"To investigate the dependency of \\ud835\\udefc on the dataset, we examined **Figures 3 and 8** and observed that datasets drawn from different distributions resulted in approximately normal distributions at specific layers.\", \"To investigate the dependency of \\ud835\\udefc on the dataset, we conducted additional experiments with **CIFAR-100 in Tables 1 and 2**. We also extended **Table 4 to include experiments with the Diffusion and Poisson equations**.\", \"To investigate the dependency of \\ud835\\udefc on architecture, we validated the method across **various network sizes and autoencoders, as shown in Table 2 and Figure 13**.\", \"**Additional experiments were conducted in Table 3, Figure 6, Figure 7, and Figure 16** to investigate the dependency of\", \"\\ud835\\udefc on dataset size. These experiments further demonstrate the data efficiency of the proposed method compared to existing methods.\"]}", "{\"comment\": \"Although the proposed method has no significant improvement in classification tasks, the additional experiment in PINN shows consistent improvement across different settings when compared with a popular initialization method with Normalization. This improves the soundness of the paper, and I have increased the score accordingly.\\n\\nQuickly read through the references provided by author, some of them do not advocate for using tanh as an activation function, instead suggest for Swish. Some of them proposed a self-scalable version of it which may not suffer from the problem mentioned in the paper. 
A more detailed search reveals Swish, atan, tanh, elu are all popular choices of activation function and they are good at different problems. If my understanding is correct, the constraint imposed on the choosing activation function for PINN is only smoothness. This limits the contribution of this paper. Can the author comment on that?\\n\\nHowever, the reviewer believes the theoretical motivation for explaining an empirical observation and proposing a solution with theoretical backing is of interest to the community hence moving the score accordingly.\\n\\nThe reviewer believes it is easier to criticize than to confirm excellence, the reviewer is not an expert in PINN so I also changed the score to reflect that.\"}", "{\"title\": \"Response to Author\", \"comment\": \"Thank you for addressing the comments and questions.\\n\\n**Regarding W1&Q2** \\n* I don't see where this is mentioned in the paper \\\"all off-diagonal elements of $D_{i,j}^l$ are 0, ensuring that the mean remains 1\\\"\\n* Also please clarify what do you mean by diagonal in a rectangular matrix\\n* What about the case where $N_{l-1} < N_l$, the mean will be less than 1 even when off diagonal elements are zero.\\n\\n**Regarding Q1** \\\\\\nThank you for adding the results with layer norm and batch norm.\"}", "{\"summary\": \"This work first provides a theoretical analysis of weight initialization when exclusively using tanh\\nas an activation function. Providing reasons for the clustering behavior when initializing networks.\\nBased on the developed theory, an initialization scheme is proposed, and the hyperparameter \\u03c3z\\nis empirically determined. There are two types of experiments mainly comparing the proposed\\ninitialization with Xavier. The first type of experiment concerns the classification accuracy in the\\nearly training phase. Showing there is an improvement in accuracy across different data sets and\\nconfigurations. 
The second type of experiment concerns solving PDE with PINNS, showing that the\\nproposed method has a good performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Unlike optimization problems with theoretical guarantees on fixed points, weight initialization\\nis an important task in deep learning. An initialization scheme with theoretical backing can\\nhave a long-lasting impact, even just for a sub-field of deep learning.\\n\\nExperiments do show significant improvement when using PINNs to solve PDE.\", \"weaknesses\": \"The impact depended on exclusively using tanh as an activation function is fundamentally\\nbeneficial in PINNs. As the current state of the paper, there is not enough support for this.\\n\\nGiven that the experiments are not too computationally intensive and the experiment section\\nonly considers a few data sets or PDEs, the demonstrated improvement may not be general.\", \"questions\": \"In sections 4.1 and 4.2, the experiment trains for 20 or 40 epochs. Do networks converge to their best accuracy? What is the difference in accuracy when training for more epochs?\\n\\nThe experiments in section 4 are not too computationally intensive, is it possible to include more\\ndata sets or PDE can show the improvements are general?\\n\\nCan the code used in the experiment can be provided to improve reproducibility?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
ErpRu7qMq1
GETMusic: Generating Music Tracks with a Unified Representation and Diffusion Framework
[ "Ang Lv", "Xu Tan", "Peiling Lu", "Wei Ye", "Shikun Zhang", "Jiang Bian", "Ji-Rong Wen", "Rui Yan" ]
Symbolic music generation aims to create musical notes, which can help users compose music, such as generating target instrument tracks based on provided source tracks. In practical scenarios where there’s a predefined ensemble of tracks and various composition needs, an efficient and effective generative model that can generate any target tracks based on the other tracks becomes crucial. However, previous efforts have fallen short in addressing this necessity due to limitations in their music representations and models. In this paper, we introduce a framework known as GETMusic, with “GET” standing for “GEnerate music Tracks.” This framework encompasses a novel music representation “GETScore” and a diffusion model “GETDiff.” GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time. At a training step, each track of a music piece is randomly selected as either the target or source. The training involves two processes: In the forward process, target tracks are corrupted by masking their tokens, while source tracks remain as the ground truth; in the denoising process, GETDiff is trained to predict the masked target tokens conditioning on the source tracks. Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with any arbitrary source-target track combinations. Our experiments demonstrate that the versatile GETMusic outperforms prior works proposed for certain specific composition tasks.
[ "Symbolic Music Generation", "Symbolic Music Representation", "Diffusion Model" ]
https://openreview.net/pdf?id=ErpRu7qMq1
https://openreview.net/forum?id=ErpRu7qMq1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yolWzSBfHP", "xd4PmV3NGG", "OQIIx8oUme", "E9UxrEPG1P" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730376264274, 1730514203602, 1730133859250, 1732623570038 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14148/Reviewer_YgLQ" ], [ "ICLR.cc/2025/Conference/Submission14148/Reviewer_iMn9" ], [ "ICLR.cc/2025/Conference/Submission14148/Reviewer_ft4d" ], [ "ICLR.cc/2025/Conference/Submission14148/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces GETMusic, a framework designed for versatile symbolic music generation that supports generating any target instrument tracks based on provided source tracks. The GETMusic framework has two main components: GETScore, a novel music representation method, and GETDiff, a diffusion-based generative model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-structured and clearly presented, and it addresses several important scenarios for conditional generation in symbolic music generation.\", \"weaknesses\": \"The biggest weakness is the limited contribution, as diffusion models for symbolic music and conditional track generation have already been explored in previous work such as AccomMontage, SongDriver. The new representation method also lacks comparisons with alternative approaches.\\n\\nThe experiments are incomplete; each contribution requires validation. For instance, it\\u2019s unclear how the representation method outperforms others or how the diffusion model improves over baseline diffusion models. Additionally, more recent works should be included in task-level comparisons, as PopMAG was introduced four years ago. 
\\n\\nThis also suggests that the related work survey is incomplete, omitting recent studies on conditional generation in symbolic music.\", \"questions\": \"Since dynamic factors like velocity and tempo variation are not considered in the paper, how does it ensure that the music sounds better than other representation methods, such as REMI, which are presented in sequence?\\n\\nIs GETScore symbolic-based information? If so, why is it measured in hours?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper has two main contributions:\\n\\n1) A symbolic music representation consisting of a tracks-by-timesteps grid, where each grid cell contains a pitch token and a duration token. Polyphony is handled by encoding a *combination* of pitches as a single token.\\n\\n2) A discrete diffusion framework that can handle arbitrary conditional generation tasks on the symbolic music grid, including unconditional generation. For conditional tasks, the paper introduces extra flags that indicate whether each grid cell is part of the conditioning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1) As far as I know this is the first application of discrete diffusion to symbolic music generation.\\n\\n2) The generated samples sound quite good!\\n\\n3) Evaluation seems good, with the caveat that I don't really trust any evaluation of generative music models :)\", \"weaknesses\": \"These are not in order of importance.\\n\\n1) There are already more symbolic music generation representations and models out there than I can keep track of, and they all sound pretty decent. I consider this problem basically \\\"solved\\\" since the release of OpenAI's MuseNet (which had no accompanying academic paper). It's not clear that this paper is a significant advance on what is already possible.\\n\\n2) The CoCoNet model by Huang et al. 
(https://arxiv.org/abs/1903.07227) uses a setup that is very similar to this paper: multiple tracks are generated with arbitrary segments fixed as conditioning; instead of diffusion, the remaining portions are generated iteratively using Monte Carlo sampling.\\n\\n3) The paper seems confused about the taxonomy of symbolic music representations, dividing the space into \\\"image-based\\\" and \\\"sequence-based\\\" representations. Here it would make sense to examine the pitch and time axes separately. Either axis can be treated in dense (\\\"image-based\\\") or sparse (\\\"sequence-based\\\") fashion.\\n\\n With time, the main reason one might use a sparse approach is to handle expressive timing; the dense resolution becomes extremely high. This paper does not model expressive timing and thus uses a dense approach, with exactly two tokens per time step. However, it's worth noting that the approach in the paper cannot easily be extended to handle not only expressive timing, but also things like triplets, without blowing up the time dimension.\\n\\n With pitch, the main reason to use a dense approach is to handle polyphony; for monophonic music the pitch axis can be collapsed into a single value at each time. However, for many polyphonic instruments e.g. piano, the space of possible pitches is quite large, making sparsity desirable. This paper handles polyphony in a somewhat unique way, flattening variable-length combinations of notes into single tokens (see next item).\\n\\n4) The handling of polyphony is very unsatisfying. For example, all combinations of piano notes are compressed to a vocabulary of 1555 tokens. This isn't even enough to represent all pairs of piano keys! And the drum vocabulary is almost 3 times as large as the piano vocabulary; how did this end up happening?\\n\\n Here's a way polyphony could potentially have been handled that only minimally changes the setup. 
On the input side, instead of blowing up the vocabulary with combinations of pitches, sum (or average) the token embeddings of all active pitches. On the output side, instead of a softmax over pitch combination tokens, sample the binary presence/absence of each pitch independently (for a diffusion model, this independence should be okay since the other cells are sampled independently anyway) then sparsify. This shouldn't increase memory usage since you need to construct the softmax vector anyway.\\n\\n (It's entirely possible that you already tried the above suggestion and it ended up not working; if so please disregard.)\\n\\n5) I am not especially knowledgable about diffusion modeling, even less so about discrete diffusion. But it's not clear whether the method in this paper goes beyond the standard approach. From what I can tell, the use of condition flags is new, but that raises the question of why previous discrete diffusion methods didn't need to use such flags, and the paper provides no discussion of this.\", \"questions\": \"I guess my main question is: why should I use this approach for symbolic music modeling when there are so many others? Part of the appeal of Transformer-based sequence modeling is that basically all of the work is in the training data preparation; an off-the-shelf model architecture can be used. This seems like a lot of modeling work for not a lot of gain.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper describes a system for multi-track symbolic music generation, _GETMusic_. The authors introduce a new musical representation, _GETScore_, which compactly represents multi-track music in a two-dimensional token-based structure, and a neural model, _GETDiff_, a non-autoregressive discrete diffusion model trained to predict randomly masked tokens from symbolic music represented as a GETScore. 
The authors build on recent literature including piano-roll generation using diffusion models, and next-token autoregressive symbolic music generation. They evaluate their proposed system with objective musical metrics, as well as with a subjective listening test, comparing the musical quality with that of previous models. In both cases, the proposed approach performs better than the baseline.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Overall, this paper presents a generative system, which within the context of non-autoregressive generative models for symbolic music is an advancement. More specifically:\", \"The GETScore musical representation is well motivated, formulated, and clearly described. The reviewer deems the proposed representation as novel in the context of previous literature.\", \"As the authors note, GETScore is quite compact compared to the commonly used piano-roll representation. Even judged alone, GETScore is a serious contribution, and it's very possible that this representation will be useful for a variety of generative and MIR tasks, which could be useful to the community.\", \"The details of GETScore and GETDiff are clearly presented. Specifically, Figures 2 and 3 are very helpful to understand the intricacies of GETScore.\", \"There are a significant number of experiments presented in Section 4. Although systems for generative symbolic music are notoriously hard to evaluate, the authors make a significant attempt to do so as rigorously as they can. In all cases, the proposed approach performs excellently.\", \"The musical samples provided on the demo page are impressive, suggesting that subjectively, this framework does well at symbolic music generation.\"], \"weaknesses\": [\"There is a potentially significant issue of missing references. The training objective and inference process for the proposed model GETDiff is quite similar in nature to those used in Huang et al. 
[1], in which the authors propose a discrete training objective, predicting missing notes from piano-rolls which have been randomly partially masked. At inference time, they use blocked Gibbs sampling, which is reminiscent of the inference procedure outlined in Section 3.2. Although the proposed approach is multi-track, and the framing of GETDiff as a discrete diffusion model changes the loss function, at the very least this work should be referenced and the similarities should be addressed in the related work. Some other relevant references are also missing, and are not present in ablation experiments, such as [2].\", \"If I understand correctly, the ablation experiment in Section 5 (L457-463) is not very well designed. By using 14 separate prediction heads and presumably sampling each column in the GETScore with a single forward pass, the training objective isn't accurately represented for GETDiff AR. As an example, when predicting the length of a note, it is impossible for the model to condition directly on the pitch of the note that it is predicting, and instead can only condition implicitly on the distribution of possible pitches predicted by the model. This introduces mathematical issues which may be responsible for the degraded performance. A much better ablation would be to compare against a transformer-decoder trained to predict the next token for a flattened version of GETScore. This should be technically possible as it would only require a context-length of 512*14=7168.\", \"There are some very minor issues about expressivity. According to our understanding, it is not possible to represent concurrent notes (e.g., chords) within a single track that have differing offsets.\", \"[1] Huang, C.Z.A., Cooijmans, T., Roberts, A., Courville, A. and Eck, D., 2019. Counterpoint by convolution. arXiv preprint arXiv:1903.07227.\", \"[2] Thickstun, J., Hall, D., Donahue, C. and Liang, P., 2023. Anticipatory music transformer. 
arXiv preprint arXiv:2306.08620.\"], \"questions\": [\"I would be interested in the following experiments, in addition to addressing the concerns outlined in the weakness section:\", \"Based on our understanding, the GETScore representation is directly compatible with the training and inference procedure used to train Coconet [1]. A comparison between these two generative systems when trained on the same dataset with the same musical representation (GETScore) would be valuable.\", \"Although the proposed model is quite conclusively better than the autoregressive Museformer, the problems highlighted make it unclear whether this superiority extends to autoregressive models trained faithfully on the GETScore representation (e.g., on a flattened version as described above).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We truly appreciate the time and effort reviewers have dedicated to this submission. We withdraw this submission to improve it based on reviewers' insightful suggestions.\"}" ] }
EreKmSOw7K
Time-Dependent Mirror Flows and Where to Find Them
[ "Tom Jacobs", "Chao Zhou", "Rebekka Burkholz" ]
Explicit regularization and implicit bias are often studied separately, though in practice, they act in tandem. However, their interplay remains poorly understood. In this work, we show that explicit regularization modifies the behavior of implicit bias and provides a mechanism to control its strength. By incorporating explicit regularization into the mirror flow framework, we present a general approach to better understand implicit biases and their potential in guiding the design of optimization problems. Our primary theoretical contribution is the characterization of regularizations and reparameterizations that induce a time-dependent Bregman function, with a discussion of the implications of its temporal variation. Importantly, our framework encompasses single-layer attention and an application to sparse coding. Extending beyond our core assumptions, we apply this framework to LoRA finetuning, revealing an implicit bias towards sparsity.
[ "Mirror flow", "Implicit Bias", "Time-dependent Bregman potential", "Explicit regularization", "LoRA", "Attention", "Sparse coding" ]
Reject
https://openreview.net/pdf?id=EreKmSOw7K
https://openreview.net/forum?id=EreKmSOw7K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w90vTitBoW", "ufItf7XvZx", "rQ0GJ3o98X", "rPapGozLu0", "nV7f0DeTxc", "nGrRfIQs2d", "lsQwruoxwY", "bd8UajSdzL", "UCcoGYNxK3", "Q7b8ymtON5", "NE6sjOgvIU", "GMjaPs1SHd", "96sRTUASFN", "8ocqPOS3ci", "6n3M00lUof", "5G0uFwz6nf" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732497078498, 1737523670311, 1733220425650, 1732612260538, 1732497007615, 1732633685072, 1730607481781, 1732496208633, 1730487174356, 1732569198352, 1732551518152, 1732496713573, 1734597267309, 1732496543558, 1732496196061, 1730397570825 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_ooDS" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_ooDS" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_LGEW" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_fdqJ" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_LGEW" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Area_Chair_3LuN" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Authors" ], [ "ICLR.cc/2025/Conference/Submission4915/Reviewer_fdqJ" ] ], "structured_content_str": [ "{\"comment\": \"12. The choice of $A_i$:\\n\\nIndeed $A_i$ have to be chosen. 
We now have made this clear in the text, and discuss particular choices such as diagonal matrices and the broader class of matrices satisfying the alignment property. Based on the diagonal matrices, we illustrate that the type of bias for the eigenvalues of $K^TQ$ changes from $L_2$ to $L_1$. This implies for the general matrix that the type of bias changes from the Frobenius norm to the nuclear norm.\\n\\n13. The shift:\\n\\nThe shift refers to the time-dependent Legendre function\\u2019s global minimum. This illustrates the effect of the positional bias, which we now explain in the introduction. Moreover, we have moved the discussion on the reparameterization $log(u) - log(v)$ to the appendix to improve the flow of the paper. \\n\\n14. Scaling effect:\\n\\nTo illustrate the effect of the scaling on the type of bias we have added Figure 7, where we plot the time-dependent Legendre function.\", \"minor_comments\": \"We have addressed these comments in the manuscript.\\n\\n**2. Limitations of the framework** \\n\\nWe have relocated the limitations of the proposed framework to Appendix B1 and expanded on these limitations by analyzing the parameterization $g(w) = \\\\Pi_{i = 1}^k w_i$ with weight decay regularization $h(w) = \\\\sum_{i=1}^k w_i^2$. In this analysis, we demonstrate that it lies outside the scope of the current framework, rendering Theorem 3.1 inapplicable. Nevertheless, we can transfer our insights for $m\\\\odot w$ and $u^{2k} - v^{2k}$ to predict the implicit bias. We observe in a diagonal linear network experiment that both the type of bias and range shrinking effect are present. Specifically, when regularization is turned off, the sparse ground truth is recovered if enough regularization was applied beforehand, as observed with the $m \\\\odot w$ parameterization. However, excessive regularization causes the range shrinking effect to dominate and prevents reaching the ground truth, as similarly observed with $u^{2k} - v^{2k}$. 
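To make the transition between the two regimes concrete, the following minimal sketch evaluates a form of the hyperbolic-entropy Legendre function known to arise for quadratic reparameterizations at initialization scale $\alpha$ (Woodworth et al. 2020). Constants and the scaling of $\alpha$ are simplified here, so this only illustrates the limiting shapes, not the exact time-dependent Legendre function derived in the paper:

```python
import math

def hyperentropy(x, alpha):
    # Per-coordinate hyperbolic-entropy-style Legendre function associated
    # with the quadratic reparameterization x = u**2 - v**2 at initialization
    # scale alpha (simplified constants), shifted so its minimum value is 0.
    a2 = alpha ** 2
    return x * math.asinh(x / a2) - math.sqrt(x * x + a2 ** 2) + a2

# Large alpha: the function is locally quadratic (L2-like),
# so doubling the argument roughly quadruples the value.
ratio_l2 = hyperentropy(2.0, 10.0) / hyperentropy(1.0, 10.0)

# Small alpha: the function grows almost linearly (L1-like),
# so doubling the argument roughly doubles the value.
ratio_l1 = hyperentropy(2.0, 0.01) / hyperentropy(1.0, 0.01)

print(ratio_l2, ratio_l1)
```

For large $\alpha$ the ratio is close to $4$ ($L_2$-like), while for small $\alpha$ it is close to $2$ ($L_1$-like), mirroring the change in the type of bias discussed above.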
Note that the solution $x$ is distinct from both the ground truth and $0$, which is significant as the dynamics might get stuck at zero.\\n\\n**3. Novelty of insight into quadratic parameterizations** \\n\\nThe cited papers primarily focus on the induced model capacity of regularization with reparameterization and the difference between weight decay and Frobenius regularization. However, they do not analyze the impact of explicit regularization onto the implicit bias, nor do they systematically derive the nature of the implicit bias, which as we show, changes from an $L_2$ to $L_1$ or Frobenius norm to nuclear norm during training.\\n\\nIn addition to our theoretical extensions, we gain novel and practical insights concerning **quadratic reparameterizations:**\\n1. The type of bias changes from an $L_2$ to $L_1$ Legendre function, as observed in our experiments. This insight can only be attained with our dynamic view. The time-dependent Legendre function changes during training, which changes the geometry of the training dynamics as described in Equation (7). This could have different implications, e.g., on the effect of early stopping, or explaining scaling laws that relate the amount of overparameterization to optimal training times (for best generalization), or guide simply dynamic tuning schedules for weight decay. Note that the type of bias is conceptually different from model capacity and is induced by a time-dependent Legendre function. We do not observe a fixed model capacity (with respect to either $L_2$ or $L_1$), as it changes from one to the other during training. \\n2. The effect of the regularization is determined by the time-dependent Legendre function. This allows us to turn off weight decay at the end of the training while still keeping the effect of the regularization. We have extended the **LoRA** experiment to illustrate this point more clearly. Now we turn off the weight decay after $200$ iterations for $2$ of the $4$ configurations. 
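As a reminder of the metric: the ratio of the nuclear norm to the Frobenius norm equals $1$ for a rank-one matrix and $\sqrt{r}$ for a matrix with $r$ equal singular values, so smaller ratios indicate effectively lower rank. A minimal sketch of its computation (with illustrative matrices, not our actual LoRA weight updates):

```python
import numpy as np

def nuc_over_fro(m):
    # Ratio of nuclear norm (sum of singular values) to Frobenius norm
    # (sqrt of the sum of squared singular values): a scale-invariant
    # proxy for the effective rank of m.
    s = np.linalg.svd(m, compute_uv=False)
    return s.sum() / np.sqrt((s ** 2).sum())

rank_one = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, -1.0, 2.0])  # rank 1
full_rank = np.eye(4)                                        # rank 4

r1 = nuc_over_fro(rank_one)   # ~1 for a rank-one matrix
r4 = nuc_over_fro(full_rank)  # sqrt(4) = 2 for the 4x4 identity
print(r1, r4)
```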
We observe that the ratio of the nuclear norm and Frobenius norm still decreases despite switched-off weight decay, effectively storing the regularization. This allows us to train longer with a desired norm ratio but unconstrained by the explicit regularization. This insight can be used to design better regularization schedules. For example, turning off the regularization at a desired ratio. In our LoRA experiments, this approach attains the models of lowest rank without drop in performance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We would like to thank you for your valuable review. We believe that we have addressed all points of criticism but would be happy to address any open issues if there should remain any. Since the discussion period is approaching its end, we would highly appreciate your feedback.\"}", "{\"title\": \"Official Comment by Reviewer ooDS\", \"comment\": \"I maintain my scores after reading the authors' responses and other reviewers' comments.\"}", "{\"comment\": \"We would like to express our gratitude for your time and efforts in providing valuable comments and detailed feedback on our manuscript. We have carefully reviewed the comments and have made corresponding revisions to our manuscript, in particular, to increase its clarity. Please find a detailed point-by-point response below. In case of any open questions or concerns, we would be happy to discuss them.\\n\\n**1. Resolved clarity issues:**\\n\\n1. Definition Legendre function:\\n\\nWe have added a reference to Definition 3.1 (Li et al. 2022) when the Legendre function is introduced.\\n\\n2. Discrepancy about $Q$:\\n\\nWe define $Q$ as the inverse of the map $y \\\\rightarrow \\\\partial_y R(x,y)$. The dependence on $a_t$ comes from the (standard) mirror flow equation. Namely, we have $ \\\\partial_y R(x_t,y_t) = a_t$. We have updated the statement to make this clear.\\n \\n3. 
Definition Bregman function:\\n\\nWe have added a reference to Definition A.6 (Li et al. 2022) where the Legendre function is introduced.\\n\\n4. Contracting example:\\n\\nWe have added an example when the definition is given. The example is $R_a(x) = (x-a)^2$, where it is contracting on the set $(-\\\\infty, 0]$.\\n\\n5. Existence of minimizer:\\n\\nWe have reformulated it so that the set of minimizers must be non-empty.\\n\\n6. Previous settings.\\n\\nWe have added explicit references to previous work that uses the reparameterizations.\\n\\n7. Definitions of $g_i$ and $w_i$:\\n\\nWe have specified that $g_{i,j}: \\\\mathbb{R} \\\\rightarrow{R}$ and $h_{i,j} : \\\\mathbb{R} \\\\rightarrow \\\\mathbb{R}$. This makes $g_i : \\\\mathbb{R}^{m_i} \\\\rightarrow \\\\mathbb{R}$ and $w_i \\\\in \\\\mathbb{R}^{m_i}$.\\n\\n8. Clarity of Corollaries 3.1 and 3.2:\\n\\nBoth corollaries have been made precise. Theorem 3.1 applies if and only if the proposed condition holds. Furthermore, we have moved Corollary 3.2 to Appendix B.1 to improve the flow of the manuscript. We have also expanded the discussion on the limitations of the proposed framework there.\\n\\n9. Notation of regularization.\\n\\nWe agree that the current notation was not rigorous. We have updated all notations of regularization. For example, $m^2 + w^2$ is replaced with $||m||^2_{L_2} + ||w||^2_{L_2}$.\\n\\n10. The 3 effects:\\n\\nIn the revised manuscript, we define 3 main effects how explicit regularization impacts the implicit bias and described their interplay in the introduction section as follows: \\n\\n1. Type of bias: Explicit regularization changes the shape of the Legendre function (and thus the nature of the implicit regularization). For example, the shape of the Legendre function changes from an $L_2$ norm to an $L_1$ norm. 
Especially if our goal is sparsification to leverage the implicit $L_1$ regularization, starting with $L_2$ regularization and thus a slow sparsification, can be critical in the context of deep learning, where training overparameterized models is known to boost performance. \\n\\n2. Positional bias: The explicit regularization shifts the global minimum of the Legendre function. Usually, we expect the minimum to be zero (e.g. if we want to sparsify a model in case of $L_1$ type of bias). Yet, in the standard mirror flow Legendre function, the global minimum corresponds to the network's initialisation. During training, the explicit regularization can move the minimum closer to zero. Only when we move the minimum to 0, we would actually promote sparsity in case of implicit $L_1$ regularization, for instance. For that reason, it is important to ensure that the positional bias vanishes during training. We show that this can be achieved by explicit regularization.\\n\\n3. Range shrinking: In addition to the positional bias, the explicit regularization also shrinks the range of the attainable values of the Legendre function. For example, the $L_1$ norm of the network parameters becomes fixed during training. This can be a problem if it hinders further training. \\n\\nMoreover, we have made plots for the analyzed time-dependent Legendre functions, see Figure 7 and 9.\\n\\n\\n11. Transformer mechanism:\\n\\nWe have added a disclaimer in the text that there are more parts to consider in the case of attention. The analysis in (Sheen et al. (2024)) assumes for example that the value matrix is not trainable. The softmax operation can be seen as part of the function $f$. Therefore, in case of a non-trainable value matrix, Theorem 3.1, characterizing the implicit bias, applies. \\n\\nSheen, H., Chen, S., Wang, T., & Zhou, H.H. (2024). Implicit Regularization of Gradient Flow on One-Layer Softmax Attention. 
ArXiv, abs/2403.08699.\"}", "{\"comment\": \"Thank you for your response and for thoroughly reviewing the other comments.\\n\\nWe have enhanced the clarity of our manuscript based on your valuable suggestions and have added further explanations to make the material more accessible for non-experts. We are happy that our main storyline was accessible to you and we are impressed by your deep understanding of the subject and our work. Please note that the main content of the paper remains unchanged. All revisions have been detailed in the accompanying comments, and as such, a complete review of the manuscript is not required. \\n\\nOur novel insight in comparison to the cited paper is that we find that the Legendre function is time-dependent and evolves from $L_2$ to $L_1$ regularization. This implies that the implicit bias is not $L_1$. Which regularization we encounter depends on the strength of the weight decay and the training time. It can be $L_2$ or $L_1$ or something in between. For that reason, we would also receive different results if we were to train a model (with original parameters $x$) with constant $L_1$ regularization.\"}", "{\"summary\": \"This paper shows that explicit regularization modifies the behavior of implicit bias and provides a mechanism to control its strength. The primary theoretical contribution is the characterization of regularizations and parameterizations that induce a time-dependent Bregman potential, with a discussion of the implications of its temporal variation. This framework encompasses single-layer attention and an application to sparse coding.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.
This paper presents sufficient conditions for incorporating explicit regularizations into the mirror flow framework and characterizes their effects by analyzing three main aspects of implicit bias: shifts in positional bias, the type of bias introduced, and the reduction in the range of values.\\n\\n2. This paper proposes a systematic method for identifying these regularizations and establishes a general convergence result within this framework.\\n\\n3. This paper emphasizes the impact of regularization on implicit bias, demonstrating these effects in experiments such as sparse coding, transformer attention mechanisms, and LoRA fine-tuning in large language models.\\n\\n4. This paper reveals that weight decay modulates the degree of sparsification brought about by quadratic parameterizations, including those in attention mechanisms and LoRA adjustments.\", \"weaknesses\": \"1. However, their interplay remains poorly understood. This claim lacks support.\\n\\n2. More experiments on real-world datasets are needed to verify the validity of their method.\\n\\n3. This paper lacks motivation for studying the behavior of implicit bias.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**3. Motivation and significance of implicit bias framework:**\\n\\nThe main goal of characterizing implicit bias is to understand the role of overparameterization in deep learning and how it contributes to its success. Commonly, the corresponding regularization is tied to the specifics of the parameterization, neural network architecture, and the optimizer. Yet, its strength is usually assumed to be fixed during training. \\n\\nOur framework extension changes this picture, as it aims to understand the effect of explicit regularization on the implicit bias (which is almost always applied in practice).
This is significant, as the explicit regularization changes the nature and strength of the implicit bias of the studied architecture and gives us a way to control and therefore exploit it. This nature and strength can even change dynamically during training. Importantly, this dynamic change is critical to address issues with range shrinking and positional bias, and thus guarantee convergence. \\n\\nFrom a conceptual point of view, our results are of independent interest, as they extend the mirror flow framework to a time-dependent mirror flow framework.\\n\\nFrom a practical point of view, we gain insights into modern architectural characteristics such as attention and LoRA, and how weight decay induces sparsity. To boost their performance, we propose to switch-off weight decay in the last training rounds. (Note that this switch-off enables convergence of a minimizer of the original optimization objective $f$ according to Theorem 3.2.)\\n\\nWe have highlighted these contributions more clearly in the introduction of our revised manuscript.\\n \\n**Practically relevant insights:**\", \"we_gain_two_key_novel_insights_of_our_intricate_extension_for_quadratic_reparameterizations\": \"**\\n1. The type of bias changes from an $L_2$ to $L_1$ Legendre function, as observed in our experiments. This insight can only be attained with our dynamic view. The time-dependent Legendre function changes during training, which changes the geometry of the training dynamics as described in Equation (7). This could have different implications, e.g., on the effect of early stopping, or explaining scaling laws that relate the amount of overparameterization to optimal training times (for best generalization), or guide simply dynamic tuning schedules for weight decay. Note that the type of bias is conceptually different from model capacity and is induced by a time dependent Legendre function. 
We do not observe a fixed model capacity (with respect to either $L_2$ or $L_1$), as it changes from one to the other during training. \\n2. The effect of the regularization is determined by the time-dependent Legendre function. This allows us to turn off weight decay at the end of the training while still keeping the effect of the regularization. We have extended the **LoRA** experiment to illustrate this point more clearly. Now we turn off the weight decay after $200$ iterations for $2$ of the $4$ configurations. We observe that the ratio of the nuclear norm and Frobenius norm still decreases despite switched-off weight decay, effectively storing the regularization. This allows us to train longer with a desired norm ratio but unconstrained by the explicit regularization. This insight can be used to design better regularization schedules. For example, turning off the regularization at a desired ratio. In our LoRA experiments, this approach attains the models of lowest rank without drop in performance.\"}", "{\"summary\": \"The paper studies mirror flows, with the goal of identifying how explicit regularization and the inducing parameterization affect implicit regularization.\\nFor certain parameterizations and regularizations, it characterizes their impact.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"- **S1: Novelty and generality** Making the interplay of explicit and implicit\\n regularization concrete is certainly a relevant goal, as it provides the\\n possibility to modulate the explicit regularization with time to achieve a\\n specific implicit regularization. The mirror flow framework, in which the\\n paper expands on this goal, provides a general framework, and the presented\\n results seem general.\", \"weaknesses\": \"I found the paper extremely challenging to follow. 
This may be because I am only\\npartially familiar with mirror flows (reflected by my confidence), but I also\\nbelieve the presentation is at many points not self-consistent, lacking clarity,\\nand missing interpretation and relation to practical contexts. In the following,\\nI am pointing out some aspects that prevented me from comprehending and\\nappreciating the paper's contribution:\\n\\n- **W1 Self-consistency and clarity:** The introduction to mirror flows is\\n confusing.\\n\\n Specifically, I struggled to keep up with relating the presented math in\\n Sections 3+4 to that in the introduction: the primal objective $f(x)$,\\n (Equation 4), primal explicit regularization $h(x)$ (Equation 4), dual\\n objective $f(g(w))$, and dual explicit regularization $h(w)$ (Equation 5). I\\n think the paper would be much clearer if Sections 3+4 used notation that made\\n the connections to these objects obvious. Sometimes, it is also hard to follow\\n along because objects do not have names, for instance there is a function $R$\\n that is always referred to as 'the function $R$'; is there a more meaningful\\n name, e.g. is it related to the distance-generating function in mirror\\n descent? Same with the 'Bregman potential', that is mentioned in the abstract,\\n but does not seem to show up in the main text. I also found the introduction\\n to mirror flows in A.1 not very helpful as it mainly consists of an\\n enumeration of definitions.\\n\\n Another problem I had in keeping up with the presentation is that some of the\\n steps were not clearly motivated or outlined. 
Therefore, I could only\\n acknowledge the existence of some results, specifically the examples on\\n different parameterizations in Examples 3.1+3.2, as well as their paragraphs\\n in Sections 4+5, but not understand how they contribute to the bigger picture.\\n\\n Lastly, the authors introduce the concepts of positional bias and range\\n shrinking in the introduction, but never explain what exactly they mean.\\n\\n- **W2 Interpretation and relation to practical contexts** I could not follow\\n certain interpretations, explanations, and motivations, e.g. why is it\\n interesting to study the setup in Corollary 3.2? Are the parameterizations\\n presented in the examples relevant in practise? How does the paragraph below\\n Equation 8 relate to its math? What is the theory's prediction for LoRA in\\n Section 5, and is it reflected by the empirical results? I think these are\\n currently just not explained clearly enough, making it hard to assess the\\n paper.\", \"questions\": \"Please see W1 and W2.\\n\\nI am willing to re-assess my score if the authors find ways to clarify the presentation and motivation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response, I have read it and the other reviews carefully. As stated in my review, the original manuscript suffered from major issues in clarity and also in terms of the significance of some of the contributions. I appreciate the effort in updating the paper during the discussion period to address these concerns. It seems that the updated version has improved upon the initial submission, however, I believe the extent of changes essentially necessitates another full review of the paper, and is not suitable for the author-reviewer discussion period. 
Unfortunately, I therefore cannot recommend acceptance of the paper.\\n\\nRegarding the novelty of identifying that $L_2$ regularization in matrix factorization (equivalently, linear neural networks) leads to a bias towards low nuclear norm. I agree that technically the result in the cited work is not identical to that provided in this paper. Though, it is worth noting that it does imply that using an $L_2$ regularization means you learn a solution which has minimal nuclear norm, out of all possible matrices that attain the same loss. In that sense it is not true that the insight of $L_2$ explicit regularization being converted to a nuclear norm regularization can only be attained via the dynamic view proposed in this work. With that said, the other potential implications regarding early stopping or explaining scaling laws are not explained by prior work. It may be worth emphasizing them.\"}", "{\"comment\": \"Thank you for your detailed response! I have no follow-up questions for now.\"}", "{\"comment\": \"**W2: Interpretation and relation to practical contexts:**\\n\\nThe main practical implication of our theoretical insights is that we can change and control the implicit bias by explicit regularization, even dynamically during training. Specifically, we use our findings to derive guiding principles for sparse coding, attention, and LoRA finetuning. For instance, we gain insights into how weight decay promotes effectively low rank solutions. While attention requires sufficiently strong regularization for $L_1$ to dominate late in training, LoRA finetuning finds low rank solutions if weight decay is turned off in later training rounds. More details on the exposition follow below.\\n\\n**Different parameterizations:** The goal of Corollary 3.2 is to explore the limitations of the proposed framework. We have moved Corollary 3.2 to the appendix and have provided an additional class of parameterizations. 
The additional class is described by the reparameterization $g(w) = \\\\Pi_{i = 1}^k w_i$ and $h(w) = \\\\sum_{i = 1}^k w_i^2$, with $k >2$. We show that this class does not satisfy the sufficient condition to apply Theorem 3.1, which characterizes the time-dependent mirror flow. Nevertheless, we can make predictions based on the developed framework, which we demonstrate in the context of diagonal linear network experiments. We observe both the type of bias and range shrinking effect for the reparameterization with $k = 3$. The type of bias prediction follows from the analyzed reparameterization $m \\\\odot w$ and the range shrinking follows from the reparameterization $u^{2k} - v^{2k}$.\\nWe have included references to previous work where specific reparameterizations were studied to highlight the broader relevance of our framework. For instance, reparameterizations such as $m \\\\odot w$ (Pesme et al. 2021) and $u^{2k} -v^{2k}$ (Woodworth et al. 2020) have been used to gain insights into the training dynamics of neural networks, specifically, the effect of stochasticity and overparameterization. They have also been exploited for sparsification (Jacobs & Burkholz 2024). \\n\\nPesme, S., Pillaud-Vivien, L., & Flammarion, N. (2021). Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity. Neural Information Processing Systems.\\n\\nWoodworth, B.E., Gunasekar, S., Lee, J., Moroshko, E., Savarese, P.H., Golan, I., Soudry, D., & Srebro, N. (2019). Kernel and Rich Regimes in Overparametrized Models. ArXiv, abs/2002.09277.\\n\\nJacobs, T., & Burkholz, R. (2024). Mask in the Mirror: Implicit Sparsification. ArXiv, abs/2408.09966.\\n\\nFor the derivation of Eq. (8), we use Theorem 3.3. The main step is inverting the function $Q_a(\\\\mu)$. We added an explanation after the equation.\\n\\n**Practically relevant insights:**\", \"we_gain_two_key_novel_insights_of_our_intricate_extension_for_quadratic_reparameterizations\": \"**\\n1. 
The type of bias changes from an $L_2$ to $L_1$ Legendre function, as observed in our experiments. This insight can only be attained with our dynamic view. The time-dependent Legendre function changes during training, which changes the geometry of the training dynamics as described in Equation (7). This could have different implications, e.g., on the effect of early stopping, or explaining scaling laws that relate the amount of overparameterization to optimal training times (for best generalization), or guide simply dynamic tuning schedules for weight decay. Note that the type of bias is conceptually different from model capacity and is induced by a time dependent Legendre function. We do not observe a fixed model capacity (with respect to either $L_2$ or $L_1$), as it changes from one to the other during training. \\n2. The effect of the regularization is determined by the time-dependent Legendre function. This allows us to turn off weight decay at the end of the training while still keeping the effect of the regularization. We have extended the **LoRA** experiment to illustrate this point more clearly. Now we turn off the weight decay after $200$ iterations for $2$ of the $4$ configurations. We observe that the ratio of the nuclear norm and Frobenius norm still decreases despite switched-off weight decay, effectively storing the regularization. This allows us to train longer with a desired norm ratio but unconstrained by the explicit regularization. This insight can be used to design better regularization schedules. For example, turning off the regularization at a desired ratio. In our LoRA experiments, this approach attains the models of lowest rank without drop in performance.\"}", "{\"metareview\": \"In this work the authors consider the effect that explicit regularization has on the implicit regularization induced by optimization dynamics. 
This is an interesting question, as adding explicit regularization clearly changes the optimization dynamics and by extension the implicit regularization, which is a worthwhile topic to understand in more detail. Unfortunately, however, many of the reviewers found the clarity of the manuscript to be lacking, to the point that it was difficult to assess the significance of the contribution. Beyond manuscript clarity, there are also questions about whether the formulations and parameterizations studied will be of interest for practical problems. For these reasons, I am rejecting the paper but would encourage the authors to clarify their presentation along with the motivation/justification for the studied formulations and parameterizations for submission to a future conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors appear to have made substantial modifications to the paper in response to the initial reviews. However, while this was appreciated by the reviewers, many of the edits were to technical definitions and statements/claims. Given the extent of the changes, the reviewers suggested that a full re-review of the paper would be required in a future venue.\"}
(3) prior to introducing Theorem 3.1, where we derive the associated primal time-dependent mirror flow. To ensure clarity, we have added subscripts to all gradients and Hessians indicating whether each is with respect to the primal or dual variable. Moreover, we now explicitly and consistently refer to $g$ as the reparameterization and $h$ as the explicit regularization.\\n\\nThe implicit regularizer $R$ can indeed be interpreted as a distance-generating function, akin to those used in mirror descent. In the revised manuscript, we now refer to the function $R$ as the Legendre function or time-dependent Legendre function. For additional context, we also mention the link to mirror descent. Note that, in the specific case for convergence and quadratic parameterizations, $R$ becomes a Bregman function. For precise definitions, we refer to Li et al. (2022), where both the definition of a Legendre and Bregman function can be found in Definitions A.6 and 3.2. Finally, we agree that the term \\u201cpotential\\u201d can be confusing and have replaced it with \\u201cfunction\\u201d. \\n\\nWe refer to previous works to put the considered parametrizations in greater context, which we mention in W2.\\n\\nIn the revised manuscript, we define three main effects of how explicit regularization impacts the implicit bias and describe their interplay in the introduction section as follows: \\n\\n1. Type of bias: Explicit regularization changes the shape of the Legendre function (and thus the nature of the implicit regularization). For example, the shape of the Legendre function changes from an $L_2$ norm to an $L_1$ norm. Especially if our goal is sparsification to leverage the implicit $L_1$ regularization, starting with $L_2$ regularization and thus a slow sparsification, can be critical in the context of deep learning, where training overparameterized models is known to boost performance. \\n\\n2. Positional bias: The explicit regularization shifts the global minimum of the Legendre function. 
Usually, we expect the minimum to be zero (e.g., if we want to sparsify a model in the case of an $L_1$ type of bias). Yet, in the standard mirror flow Legendre function, the global minimum corresponds to the network's initialisation. During training, the explicit regularization can move the minimum closer to zero. Only when we move the minimum to 0 do we actually promote sparsity in the case of implicit $L_1$ regularization, for instance. For that reason, it is important to ensure that the positional bias vanishes during training. We show that this can be achieved by explicit regularization.\\n\\n3. Range shrinking: In addition to the positional bias, the explicit regularization also shrinks the range of the attainable values of the Legendre function. For example, the $L_1$ norm of the network parameters becomes fixed during training. This can be a problem if it hinders further training. \\n\\nMoreover, we have made plots for the analyzed time-dependent Legendre functions; see Figures 7 and 9.\\n\\n\\nLi, Z., Wang, T., Lee, J.D., & Arora, S. (2022). Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent. ArXiv, abs/2207.04036.\"}", "{\"comment\": \"We would like to express our gratitude for your time and efforts in providing valuable comments on our manuscript. We have carefully reviewed the comments and have made corresponding revisions to our manuscript in response to your insightful feedback. Please find a detailed point-by-point response below. In case of any open questions or concerns, we would be happy to discuss them.\\n\\n**1. Three main effects of explicit regularization on implicit bias:**\\n\\nIn the revised manuscript, we define three main effects of how explicit regularization impacts the implicit bias and describe their interplay in the introduction section as follows: \\n\\n1. Type of bias: Explicit regularization changes the shape of the Legendre function (and thus the nature of the implicit regularization). 
For example, the shape of the Legendre function changes from an $L_2$ norm to an $L_1$ norm. Especially if our goal is sparsification to leverage the implicit $L_1$ regularization, starting with $L_2$ regularization and thus a slow sparsification, can be critical in the context of deep learning, where training overparameterized models is known to boost performance. \\n\\n2. Positional bias: The explicit regularization shifts the global minimum of the Legendre function. Usually, we expect the minimum to be zero (e.g., if we want to sparsify a model in the case of an $L_1$ type of bias). Yet, in the standard mirror flow Legendre function, the global minimum corresponds to the network's initialisation. During training, the explicit regularization can move the minimum closer to zero. Only when we move the minimum to 0 do we actually promote sparsity in the case of implicit $L_1$ regularization, for instance. For that reason, it is important to ensure that the positional bias vanishes during training. We show that this can be achieved by explicit regularization.\\n\\n3. Range shrinking: In addition to the positional bias, the explicit regularization also shrinks the range of the attainable values of the Legendre function. For example, the $L_1$ norm of the network parameters becomes fixed during training. This can be a problem if it hinders further training. \\n\\nWe have included additional experiments in Appendix B1 to demonstrate the three explained effects. For example, Figure 4b in the revised paper shows that, with moderate regularization, there is a clear type-of-bias change from $L_2$ norm to $L_1$ norm. In contrast, higher levels of regularization do not exhibit the same behavior, which we attribute to the range-shrinking effect. As a consequence, for higher levels of regularization, we have found an additional contributor that reduces the trainability of the neural network. 
Moreover, we have made plots for the analyzed time-dependent Legendre functions; see Figures 7 and 9.\\n\\n**2. More experiments on real-world data:**\\n\\nWe have extended the LoRA experiment. In this revised setup, weight decay is turned off after $200$ iterations for $2$ of the $4$ configurations. Interestingly, we observe that the ratio between the nuclear norm and Frobenius norm continues to decrease, effectively storing the regularization effect within the time-dependent Legendre function. This behavior allows us to train longer with a desired ratio unconstrained by the explicit regularization. Therefore, this insight can be used to design better regularization schedules (e.g., turning off regularization once a desired ratio is achieved).\\n\\nFurthermore, we have included an additional experiment on ImageNet with a transformer model in Appendix C. We fine-tune a pretrained model with varying weight decay and two learning rate schedules. To fairly compare the different learning rate schedules, we use the cumulative sum of the learning rate schedules as the x-axis of the plots. The results show that the ratios decay as predicted by the theory, i.e., higher weight decay leads to a larger decrease in the ratio. Moreover, we observe that for each weight decay configuration, both learning rate schedules lead to a similar decrease, especially for SGD.\"}
Then, for several types of overparameterizations, it explicitly writes the potential of the mirror flow induced by certain explicit regularizers, and discusses implications for the resulting implicit bias.\\n\\nExperiments support the theoretical results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Explicit regularizers, such as $L_2$ regularization, are commonly applied in practice. Hence, understanding how they modify the implicit bias of optimization for a given model is an important question in the theory of deep learning. This paper makes progress towards addressing this question. While the logarithmic parameterization $\\\\log (u) - \\\\log (v)$ is quite unorthodox (it is not clear where one may encounter it), the $m \\\\odot u$ and $u^{2n} - v^{2n}$ parameterizations have been widely studied in the past in the context of linear diagonal networks, and so may be of interest to the community.\", \"weaknesses\": \"1. Perhaps the main weakness of the paper is a rather severe lack of clarity, which in turn makes it difficult to assess the implications and significance of the results. As detailed below, some definitions are missing, notation is often used without proper introduction, some theorem statements are vague, terms such as \\\"positional bias\\\" are used without any intuitive or formal definition, and the implications of results are also not sufficiently discussed.\\n\\n2. The significance of the results may be limited due to the parameterizations that they support. I find that the discussion around which parameterizations and regularizations are captured by the proposed framework and which are not can be strengthened. In particular, is there hope of using the framework for analyzing more practical models?\\n\\n3. The fact that $L_2$ regularization in matrix factorization induces a bias towards low nuclear norm for the matrix product has been shown in prior work (see, e.g., [1]). 
The significance and novelty of the empirical observation for attention layers and LoRA is therefore limited, in my opinion (a discussion on the relation between $L_2$ regularization and nuclear norm in attention layers also appears in [2]).\\n\\nOverall, while the paper has the potential to further our understanding of how explicit regularization affects the implicit bias of optimization, in its current form I do not believe it is suitable for publication. I would recommend that the authors thoroughly revise the paper and improve its clarity, for which I hope the comments below can be useful. Furthermore, the significance of the results can be strengthened by further mapping out which parameterizations and regularizations fall under the proposed framework and which do not. Currently, the most practical models captured by the framework are matrix factorizations, which appear as part of different factorized layers. However, as mentioned above, the fact that an $L_2$ regularization translates to nuclear norm minimization for matrix factorization is already known (e.g., [1]) and does not necessitate the more complex machinery.\", \"detailed_remarks_regarding_clarity\": [\"In Definition 3.1, it is not specified what the definition of a Legendre function is. I assume the intention was to use the same definition from Li et al. 2022. I recommend either stating the definition explicitly or at least referring to an existing definition in the literature (e.g. that from Li et al. 2022).\", \"In line 223, it is stated that $Q$ is defined in Theorem 3.1; however, it is not clear what the precise definition of $Q$ is. It seems that it is implicitly defined by $y = Q(x, a_t)$? Though since $y$ is actually $h(w)$ and $x$ is actually $g(w)$ there is no dependence on $a_t$, which leads me to believe the definition of $Q$ is not $Q(x, a_t) = h(w)$ as one may currently infer. Please explicitly define $Q$.\", \"In Definition 3.2, Bregman functions are not defined. 
I would recommend either explicitly defining this concept or at least referring to a specific place where they are defined. I assume the intended definition is Definition A.6 from Li et al. 2022.\", \"It can be useful to give examples for contracting parameterized Bregman functions around Definition 3.2 to help the reader understand this concept.\", \"In line 242 it is stated \\u201cassume that $\\\\nabla f$ is locally Lipschitz and $argmin \\\\\\\\{ f(x) : x \\\\in dom R_{a_\\\\infty} \\\\\\\\}$\\u201d. It is not clear what the second part of the assumption is, that the argmin exists?\", \"In line 267 it is stated that the result for separable parameterizations encompasses \\u201call previous settings\\u201d. Which previous settings does this refer to?\", \"In Corollary 3.1, what do $g_{i, j}$ and $w_{i, j}$ stand for? Is it assumed here that $g(w)$ and $w$ are matrices and that $i,j$ indexes an entry in them? In that case, what are $g_i$ and $w_i$?\", \"In Corollary 3.1, I find the statement \\u201cTo apply Theorem 3.1 we need that\\u2026\\u201d to be too vague. Is the result that the conditions of Theorem 3.1 hold if and only if this holds? Or is it just a sufficient condition? Either way it is worth explicitly stating what the conditions are. An analogous comment applies to Corollary 3.2.\", \"In line 345, the regularization is defined as $y = m^2 + w^2$, where $m$ and $w$ are vectors, but isn\\u2019t the regularization $y = h(w)$ supposed to be a scalar?\", \"In line 361, it is not clear what \\u201cpositional bias\\u201d means. This term is used in several places in the paper, but nowhere is it defined even on an intuitive level. What does \\u201cpositional bias\\u201d refer to?\", \"The paragraph starting at line 364 claims to consider an attention layer. I find this to be quite misleading as the effect of the remaining components in an attention layer (value matrix and softmax) are not considered.\", \"In line 366, what are the $A_j$ matrices in this case? 
Are they not determined by the parameterization, in which case whether or not the condition holds is determined rather than something that you can/need to assume? It is also not clear where the nuclear norm comes from in Theorem 3.3.\", \"In line 411 it is stated that \\u201cThe shift is centered at\\u2026\\u201d. What shift does this refer to? It is not clear what is being shifted and where.\", \"In line 418 it is stated that the rescaling changes the implicit bias from $L_1$ to $L_2$ regularization. It is worth clarifying why this is the case.\", \"Additional (more minor comments):\", \"Typo in line 32: an unnecessary apostrophe.\", \"I find Figure 1 to be too vague (related to the remarks on clarity above). In particular, aside from the $L_2$ to $L_1$ change in implicit bias, it is not clear what \\u201cpositional bias\\u201d or \\u201crange shrinking\\u201d refers to. Would be best to either make it more understandable from the figure itself or from the caption.\", \"In line 73, Pesme et al. 2024 is cited for showing that gradient descent converges to the solution with lowest $L_1$ distance from initialization for overparameterized linear regression. I believe this is not the correct citation, as Pesme et al. 2024 consider classification problems. Also, this claim is not true for all overparameterized linear models, rather for some parameterizations (e.g. linear diagonal networks) and under certain technical conditions (e.g. small initializations).\", \"[1] Dai, Z., Karzand, M., & Srebro, N. Representation costs of linear neural networks: Analysis and design. NeurIPS 2021.\", \"[2] Khodak, M., Tenenholtz, N., Mackey, L., & Fusi, N. Initialization and regularization of factorized neural layers. ICLR 2021.\"], \"questions\": \"--\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
ErQPdaD5wJ
AutoUAD: Hyper-parameter Optimization for Unsupervised Anomaly Detection
[ "Wei Dai", "Jicong Fan" ]
Unsupervised anomaly detection (UAD) has important applications in diverse fields such as manufacturing industry and medical diagnosis. In the past decades, although numerous insightful and effective UAD methods have been proposed, it remains a huge challenge to tune the hyper-parameters of each method and select the most appropriate method among many candidates for a specific dataset, due to the absence of labeled anomalies in the training phase of UAD methods and the high diversity of real datasets. In this work, we aim to address this challenge, so as to make UAD more practical and reliable. We propose two internal evaluation metrics, relative-top-median and expected-anomaly-gap, and one semi-internal evaluation metric, normalized pseudo discrepancy (NPD), as surrogate functions of the expected model performance on unseen test data. For instance, NPD measures the discrepancy between the anomaly scores of a validation set drawn from the training data and a validation set drawn from an isotropic Gaussian. NPD is simple and hyper-parameter-free and is able to compare different UAD methods, and its effectiveness is theoretically analyzed. We integrate the three metrics with Bayesian optimization to effectively optimize the hyper-parameters of UAD models. Extensive experiments on 38 datasets show the effectiveness of our methods.
[ "anomaly detection", "hyper-parameter optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=ErQPdaD5wJ
https://openreview.net/forum?id=ErQPdaD5wJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zs4dx4p2a4", "txDllimvax", "sMdl2jUrhR", "qqzX7VGpiy", "kv7C3aS4qy", "kryd9zKEp0", "hHgUEuzuXv", "dl6GdkniJj", "dKtmDcDYW8", "SWIgRiUr2I", "RD9v7M7HM4", "Q1M2YlOVsO", "PLEwoPguZ4", "JX7DewqmZ9", "J6l7XCLgJb", "J2fKvMX0JA", "IahPqtlKcG", "GDo2YHdh3o", "2yBhoKR2xM", "2vFQEKQfob", "1U9spr1p7V" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729857062896, 1732017845529, 1731982570065, 1737523636739, 1732027626134, 1732541093381, 1730307493498, 1730283917514, 1731983381288, 1733154738707, 1731937291535, 1731919629295, 1731936488090, 1731936891346, 1732451742514, 1734617705123, 1731983517265, 1732485322947, 1731981992924, 1731983122540, 1733061505263 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4385/Reviewer_FcaY" ], [ "ICLR.cc/2025/Conference/Submission4385/Reviewer_FcaY" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Reviewer_7zTy" ], [ "ICLR.cc/2025/Conference/Submission4385/Reviewer_DpZ3" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Area_Chair_AFTz" ], [ 
"ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Reviewer_DpZ3" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ], [ "ICLR.cc/2025/Conference/Submission4385/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces different metrics for the optimization of hyperparameters in unsupervised anomaly detection. The authors formulate the problems of hyperparameter tuning and model selection as a maximization of the expected evaluation metric for a random dataset containing anomalies. They then propose three metrics that approximate this unknown expected value (up to some monotonically increasing function).\\nThe key contributions of the paper are (i) the introduction of three evaluation metrics for hyperparameter optimization, (ii) theoretical analysis of the third metric and (iii) empirical experiments comparing the optimization based on the proposed evaluation metrics with other model selection methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper has a clear common thread, and the structure from internal to external evaluation metrics is comprehensible. The problem of hyperparameter optimization is well motivated, and extensive empirical experiments are provided.\\n\\nThe proposed metrics are, generally, clearly defined, and the approach is innovative. \\n\\nThis paper could be a helpful contribution to unsupervised hyperparameter tuning.\", \"weaknesses\": \"The proposed metrics seem to work only under strong assumptions, no theoretical guarantees in terms of the expected false positive/negative rate are provided, and the results of experiments seem to be overly optimistic with results being reported on the same data that was used for model selection and hyperparameter tuning.\\n\\nGiven the current shortcomings (see below), I recommend rejecting the paper. 
I would be willing to increase the score, provided that the concerns below are addressed.\\n\\n---\", \"update\": \"Based on the additional results and explanations provided by the authors, I suggest accepting the paper.\", \"questions\": [\"Comments/Questions:\", \"Hyperparameter tuning is a challenge in other unsupervised learning tasks as well. Which relevant approaches are there that are worth adding to the \\\"Related Work\\\" section?\", \"The internal evaluation metrics depend on hyperparameters themselves, which does not seem to be an advantage. How robust are they w.r.t. different choices of the hyperparameters?\", \"Definition 2: Are the true labels supposed to be in $\\\\{0, 1\\\\}$? Generally, the anomaly scores are not restricted to the interval $[0, 1]$. What should be passed to the evaluation metric $\\\\mathcal{E}?$ The anomaly scores, or binary predictions based on them?\", \"L205: \\\"While these tail points are part of the normal data, their distance from the majority can make them resemble potential anomalies. They should be recognized by a good UAD model.\\\" -> Should a good UAD model recognize these points as anomalies or normal data? From my understanding, a *good* UAD model should not only perform well for data close to the majority, but also for data points at the boundary of \\\"normal\\\" behavior\", \"What is the intuition of using the mean and median to define RTM? It seems more natural to use the $mean(\\\\{s_i|s_i < s_{top\\\\tau\\\\%}\\\\})$ instead of the median (or the median in both cases).\", \"Can the RTM be interpreted so that an error of $\\\\tau\\\\%$ is accepted? Similar to the significance level of a statistical test?\", \"L249: What is the intuition behind the statement: \\\"RTM may overlook a case in which all scores are uniformly distributed...\\\"?\", \"L255: \\\"Based on Assumption 1, there exists (?) and ...\\\" -> What does exist?\", \"L257: What is $\\\\xi$ in equation (4)? 
There seems to be something missing.\", \"What is the intuition of the definition of $AG(\\\\xi, p(s))$? Why are the probabilities $w_0(k)$ and $w_1(k)$ used as weights? This seems to prefer values of $k$ such that $s_k$ is close to the median of the sample $s$.\", \"Definition 5: $AG(\\\\xi)$ is not defined, only $AG(\\\\xi; p(s))$ - and the latter not correctly\", \"Definition 5: Why is the conditional expectation needed? $\\\\xi \\\\ge s_{thr}$ by definition, so the conditional expectation reduces to the expectation, right?\", \"L288: If overfitting w.r.t. the training data is an issue, couldn't this simply be fixed by using a train-test-split or cross validation?\", \"Definition 6: Here the underlying assumption is that the \\\"normal\\\" data is approximately normally distributed with distribution $\\\\mathcal{N}(\\\\mu, diag(\\\\sigma))$. This seems to be a very strong assumption, especially in applications with real data.\", \"Definition 6: Why are the coordinates assumed to be uncorrelated (as modeled with the diagonal matrix)? Why are covariances between the coordinates not considered? This assumption is **very** restrictive and is not met even in the examples (see, e.g., Fig. 3)\", \"L372: Geometric interpretation: Why is the outline generally a hypersphere? That is, why is this the case if the variance across different coordinates varies?\", \"Figure 6: What does the y-axis represent? The correlation of $\\\\nu$ with AUC? In this case, high values would be good, so the results for NPD/NDP seem ambiguous.\", \"Are the final results, e.g. in Table 1, calculated based on the validation data, i.e., the same data that was used for model selection and hyperparameter tuning? If so, the results are probably overly optimistic for the model selection methods.\", \"Some considered datasets contain unrealistically many anomalies. 
Can \\\"anomalies\\\" that occur more than 10-20\\\\% of the time be considered as real anomalies?\", \"Additional Feedback\", \"Definition 1: \\\"normal distribution\\\" (and the term \\\"normal\\\" in general) is ambiguous: clearly the distribution of normal data points is meant and not the normal distribution $\\\\mathcal{N}(\\\\mu, \\\\sigma)$, but the terms \\\"normal\\\" (as typical) and \\\"normal\\\" (as Gaussian) should be distinguished better\", \"Definition 1: $p_0$ and $p_1$ are not defined - probably the density functions of $D_0$ and $D_1$?\", \"Definition 2 (L. 170-171): The joint distribution of $(\\\\tilde{X}, \\\\tilde{Y})$ could be specified instead of the vague \\\"Let $\\\\tilde{X}$ be a random unseen dataset containing both normal samples and anomalous samples\\\"\", \"Definition 2: $\\\\mathbb{E}$ should denote the expectation over the joint distribution of $(\\\\tilde{X}, \\\\tilde{Y})$\", \"Definition 3: The notation could be simplified by using the usual notation $s_{(i)}$ for sorted values (with $s_{(1)} \\\\le \\\\dots \\\\le s_{(n)}$ and $s_{(\\\\frac{\\\\tau}{100} N)}$ instead of $s_{top\\\\tau\\\\%}$\", \"Definition 3: $\\\\varepsilon$ could be avoided by the case distinction $med(s)=0$ and $med(s)\\\\neq 0$\", \"Definition 4: \\\"Let $p(s)$ be the distribution of the anomaly score $s$\\\" -> Emphasize that $p(s)$ is the unknown, theoretical distribution, not the empirical distribution based on the observations in $s$\", \"Definition 5: From Def. 4 it is not clear what the role of $\\\\xi$ is. If $\\\\xi=k$, only integer-valued are allowed. 
In this case, the distribution is simply the discrete uniform distribution on indices $k$ with $s_k > s_{thr}$, right?\", \"Typo in Definition 6: \\\"varaince vector\\\" -> \\\"variance vector\\\"\", \"The abbreviation NPD/NDP is not used consistently (mostly NPD, in experiments NDP)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your extensive response.\\n\\nWith the additional theoretical results and the explanation of the experiments, my main concerns have been resolved. Therefore, I change my rating accordingly.\\n\\nTo avoid potential confusion, I would suggest clearly stating that the hyperparameter tuning and model selection are done with the training data only (and that the validation data is part of the training data), while the test data is only used to evaluate the final model.\"}", "{\"title\": \"Rebuttal: Part II\", \"comment\": \"**Bound of expected FPR and FNR**\\n\\nRegarding the expected false positive/negative rate, in this revision, we provide a theoretical guarantee (Theorem 4) in Appendix B. Since the training data of UAD is not labeled at all, it is difficult to directly provide a theoretical guarantee for the performance (e.g., FPR and FNR) on the testing data, and we have to make additional assumptions. For convenience, we show our new result in the following context.\\n\\n\\nTo calculate the false positive rate (FPR) and false negative rate (FNR), we need to determine a threshold for the anomaly scores given by a model $\\\\mathcal{M}$. For convenience, we give the following definitions.\\n\\n**Definition 8:**\\nLet $\\\\tau _{\\\\mathcal{M}}(z)=\\\\mathbb{1}(z>c)$ be a threshold function, where $c>0$ is the threshold. 
Let $F _{\\\\mathcal{M}}=\\\\tau _{\\\\mathcal{M}}\\\\circ f _{\\\\mathcal{M}}:\\\\mathbb{R}^d\\\\rightarrow\\\\{0,1\\\\}$ and $\\\\mathcal{F} _{\\\\mathcal{M}}$ be the class of $F _{\\\\mathcal{M}}$ defined by $\\\\mathcal{M}$ with hyperparameters $\\\\Theta$.\\nThe FPR and FNR on the unseen testing data are then defined as $\\\\text{FPR}=\\\\mathbb{E} _{\\\\mathbf{x}\\\\sim\\\\mathcal{D} _0}[F _{\\\\mathcal{M}}(\\\\mathbf{x})]$ and $\\\\text{FNR}=\\\\mathbb{E} _{\\\\mathbf{x}\\\\sim\\\\mathcal{D} _1'}[1-F _{\\\\mathcal{M}}(\\\\mathbf{x})]$ respectively. \\n\\nWithout loss of generality and for convenience, we assume that in the problem defined by Definition 1, $N_1=0$, and in NPD, $M=N/2$. The following theorem (proved in Section C.4) provides a bound for the FPR and FNR on the unseen testing data.\\n\\n**Theorem 4:**\\nBased on Definition 8, letting $\\\\Delta=\\\\min\\\\lbrace{\\\\max _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{gen}}}f _{\\\\mathcal{M}}(\\\\mathbf{x})-\\\\min _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{gen}}}f _{\\\\mathcal{M}}(\\\\mathbf{x}),\\\\max _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{val}}}f _{\\\\mathcal{M}}(\\\\mathbf{x})-\\\\min _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{val}}}f _{\\\\mathcal{M}}(\\\\mathbf{x})\\\\rbrace}$, $\\\\varsigma=\\\\max _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{gen}}}f _{\\\\mathcal{M}}(\\\\mathbf{x})$, and $\\\\kappa=1.2+2(c^{-1}-\\\\frac{1}{5\\\\varsigma})\\\\sum _{\\\\mathbf{x}\\\\in\\\\mathcal{X} _{\\\\text{gen}}}f _{\\\\mathcal{M}}(\\\\mathbf{x})/N$,\\nthen over the randomness of $\\\\mathcal{X} _{\\\\text{val}}$ and $\\\\mathcal{X} _{\\\\text{gen}}$, the following inequality holds with probability at least $1-2\\\\delta$:\\n\\n$$\\n \\\\text{FPR}+\\\\text{FNR}\\\\leq \\\\kappa-\\\\frac{2\\\\Delta}{c\\\\sqrt{N}}\\\\sqrt{\\\\mathcal{V} _{\\\\text{NPD}}(\\\\mathcal{M},\\\\mathcal{X})}+\\\\frac{\\\\sqrt{2}}{2}\\\\sqrt{D _{\\\\text{KL}}(\\\\mathcal{D} _{\\\\text{gen}}||\\\\mathcal{D} 
_1')}\\n +\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{val}}}(\\\\mathcal{F} _{\\\\mathcal{M}})+\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{gen}}}(\\\\mathcal{F} _{\\\\mathcal{M}})+6\\\\sqrt{\\\\frac{\\\\log \\\\frac{2}{\\\\delta}}{N}}\\n$$\\n\\nIn the theorem, the empirical Rademacher complexities $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{val}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$ and $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{gen}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$ can be explicitly bounded for any $\\\\mathcal{M}$ (e.g., OC-SVM and AE) with any hyperparameters $\\\\Theta$ and the corresponding technique is fairly standard in the literature [1][2]. Due to this, together with the fact that our work AutoUAD is a framework not specialized to a single $\\\\mathcal{M}$, we will not show the $\\\\mathcal{M}$-specific computation of $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{val}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$ and $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{gen}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$. \\nNote that $D _{\\\\text{KL}}(\\\\mathcal{D} _{\\\\text{gen}}||\\\\mathcal{D} _1')$ can be further bounded by the similar approach used in Theorem 3. \\nOur AutoUAD finds the model with the largest $\\\\mathcal{V} _{\\\\text{NPD}}$ and hence has the potential to reduce the false positive rate and false negative rate. In addition, a smaller $D _{\\\\text{KL}}(\\\\mathcal{D} _{\\\\text{gen}}||\\\\mathcal{D} _1')$ or complexity of $\\\\mathcal{M}$ (measured as $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{val}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$ and $\\\\widehat{\\\\mathfrak{R}} _{\\\\mathcal{X} _{\\\\text{gen}}}(\\\\mathcal{F} _{\\\\mathcal{M}})$)\\nmay also lead to a lower error rate.\\n\\n[1] Bartlett and Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR 2002.\\n\\n[2] Bartlett et al. 
Spectrally-normalized margin bounds for neural networks. NeurIPS 2017.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We sincerely appreciate your response to our rebuttal. Your support is the greatest encouragement to us. We just updated the manuscript, in which your suggested statement is added to the end of the first paragraph of Section 4.\"}", "{\"comment\": \"We are very grateful for your feedback.\\n\\nRegarding the Gaussian distribution, we'd like to provide more clarification. \\n\\n* In our metric NPD, the Gaussian distribution is not used to approximate real datasets and is not used as an assumption for real data. NPD is real-distribution agnostic, no matter how complex the real data distribution is. \\n\\n* Our goal of using Gaussian distribution is to form a large hypersphere to enclose most of the training data, which is feasible owing to the definition $\\\\mathcal{N}(\\\\boldsymbol{\\\\mu}_ {t r n}, \\\\operatorname{diag}(\\\\boldsymbol{\\\\sigma}_{t r n}^2))$. As shown in Figure 4, we visualized partial features of four real datasets. These real datasets (blue points) are very different from Gaussian and most of the samples are enclosed by the hypersphere. \\n\\n* In the hypersphere, the regions not filled by the training data (blue points) are potential regions of anomaly. Our metric NPD measures the anomaly difference between the potential anomalous regions and the normal regions (where the blue points lie in), and hence guides the model to learn a compact decision boundary for anomaly detection. 
\\n\\n* The effectiveness of NPD is also theoretically supported by Theorem 2, Theorem 3, and Theorem 4.\\n\\n\\nRegarding the \\\"at most $20\\\\%$\\\" assumption, we added the following experiments.\\n\\nDue to time constraints, we selected five datasets, Breastw, Fault, Glass, Pima, and SpamBase, to conduct two sets of experiments to examine EAG's sensitivity to different anomaly ratios in the assumption.\\n\\n* First, we vary the choice of $s_{thr}$ as it is decided by the anomaly ratio $\\\\hat{r}$ in the assumption, where $s_{thr} = G^{-1}(1-\\\\hat{r})$. Note that in this case, the training data are not contaminated. The results are shown below. We see that the OCSVM is not very sensitive to $\\\\hat{r}$.\\n\\n**Table:** Test AUC and F1 scores of EAG on OCSVM and DPAD across 5 datasets with different choices of $\\\\hat{r}$ (corresponding to different possible maximum anomaly ratios)\\n\\\\begin{matrix}\\n\\\\hline \\\\hline\\n\\\\textbf{Model} & \\\\hat{r} & \\\\textbf{1\\\\\\\\%} & \\\\textbf{5\\\\\\\\%} & \\\\textbf{10\\\\\\\\%} & \\\\textbf{15\\\\\\\\%} & \\\\textbf{20\\\\\\\\%} & \\\\textbf{25\\\\\\\\%} & \\\\textbf{30\\\\\\\\%} \\\\\\\\\\\\\\\\ \\\\hline\\n{OCSVM} & AUC & 68.39 \\\\pm 19.6 & 68.37 \\\\pm 21.5 & 70.61 \\\\pm 19.8 & 69.75 \\\\pm 18.4 & 72.19 \\\\pm 19.2 & 72.54 \\\\pm 19.7 & 81.52 \\\\pm 10.9 \\\\\\\\\\\\\\\\ \\n & F1 & 60.41 \\\\pm 31.4 & 60.81 \\\\pm 31.8 & 61.67 \\\\pm 32.0 & 58.66 \\\\pm 31.6 & 57.68 \\\\pm 31.0 & 56.94 \\\\pm 30.9 & 64.79 \\\\pm 25.7 \\\\\\\\\\\\\\\\ \\\\hline\\n{DPAD} & AUC & 53.62 \\\\pm 6.1 & 55.95 \\\\pm 12.0 & 61.03 \\\\pm 21.2 & 66.22 \\\\pm 22.1 & 54.77 \\\\pm 7.2 & 69.17 \\\\pm 18.3 & 54.99 \\\\pm 7.4 \\\\\\\\\\\\\\\\ \\n & F1 & 44.97 \\\\pm 19.0 & 46.72 \\\\pm 13.8 & 53.86 \\\\pm 30.8 & 54.55 \\\\pm 22.6 & 47.88 \\\\pm 21.2 & 60.87 \\\\pm 26.6 & 45.49 \\\\pm 26.1 \\\\\\\\\\\\\\\\ \\\\hline\\\\hline\\n\\\\end{matrix}\\n\\n\\n* Second, we construct a contaminated training dataset by adding true anomalies (keep 
unlabeled during the training). We can see that both OCSVM and DPAD perform better when the real anomaly ratio is less than the hyperparameter $20\\\\\\\\%$ in our assumption and they perform best when the real anomaly ratio is the same as the hyperparameter $20\\\\\\\\%$. This further demonstrates the effectiveness of our assumption.\\n\\n**Table:** Test AUC and F1 scores of EAG on OCSVM and DPAD across 5 datasets with different contamination ratios.\\n\\\\begin{matrix}\\n\\\\hline\\\\hline\\n\\\\textbf{Model} & \\\\textbf{Contamination Ratio} & \\\\textbf{1\\\\\\\\%} & \\\\textbf{5\\\\\\\\%} & \\\\textbf{10\\\\\\\\%} & \\\\textbf{15\\\\\\\\%} & \\\\textbf{20\\\\\\\\%} & \\\\textbf{25\\\\\\\\%} & \\\\textbf{30\\\\\\\\%} \\\\\\\\\\\\\\\\ \\\\hline\\nOCSVM & AUC & 67.62 \\\\pm 20.4 & 61.43 \\\\pm 20.2 & 63.04 \\\\pm 21.5 & 67.42 \\\\pm 17.7 & 68.78 \\\\pm 18.3 & 60.68 \\\\pm 25.6 & 59.77 \\\\pm 23.4 \\\\\\\\\\\\\\\\ \\n & F1 & 60.54 \\\\pm 31.6 & 56.19 \\\\pm 29.9 & 57.17 \\\\pm 30.6 & 56.48 \\\\pm 35.2 & 56.07 \\\\pm 35.5 & 48.75 \\\\pm 34.4 & 53.99 \\\\pm 35.0 \\\\\\\\\\\\\\\\ \\\\hline\\nDPAD & AUC & 55.49 \\\\pm 10.9 & 58.16 \\\\pm 16.0 & 52.02 \\\\pm 24.7 & 52.86 \\\\pm 4.7 & 60.79 \\\\pm 20.9 & 48.36 \\\\pm 13.6 & 53.74 \\\\pm 26.6 \\\\\\\\\\\\\\\\ \\n & F1 & 46.77 \\\\pm 14.0 & 54.34 \\\\pm 21.1 & 49.24 \\\\pm 29.4 & 47.07 \\\\pm 20.1 & 51.86 \\\\pm 34.0 & 44.39 \\\\pm 25.2 & 51.60 \\\\pm 34.4 \\\\\\\\\\\\\\\\ \\\\hline\\\\hline\\n\\\\end{matrix}\\nThank you again and we hope this response can address your remaining concerns. We are looking forward to your further feedback.\"}", "{\"summary\": \"Unsupervised anomaly detection (UAD) faces challenges in model selection and hyper-parameter tuning due to the lack of labeled anomalies and dataset variability. This work introduces three evaluation metrics\\u2014relative-top-median, expected-anomaly-gap, and normalized pseudo discrepancy (NPD)\\u2014to estimate model performance without labels. 
Using Bayesian optimization, these metrics streamline UAD tuning, showing effectiveness across 38 datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written.\", \"The focus on hyper-parameter optimization in UAD is important and well-motivated.\", \"The proposed method is novel.\", \"Extensive experiments demonstrate the method\\u2019s effectiveness.\"], \"weaknesses\": [\"The details of AutoUAD are complex and difficult to understand.\"], \"questions\": \"Please see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims at improving hyperparameter optimization of unsupervised anomaly detection methods. To that end, it first proposes three different metrics that can be used to estimate the performance of UAD methods at testing. The metrics are used within a Bayesian optimization framework to estimate the hyperparameters of different UAD methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The work addresses a relevant problem that requires further attention from the literature\", \"A large set of datasets is used for evaluation.\"], \"weaknesses\": [\"The paper lacks clarity. As pointed out in the questions section, there are multiple sentences that seem incomplete, which makes it difficult to understand the concepts being explained.\", \"Some of the statements and claims are contradictory across the paper. For example, the paper claims to be unsupervised and criticizes previous works that claim to be unsupervised but still require some form of supervision: this work aims at fixing this. 
Nonetheless, Definition 2 and remark 3.2 introduce labels for normal and abnormal points ($\\mathcal{Y}$) and in the explanation of Definition 4 there seems to be the notion of supervision as there is the idea of true anomalies.\", \"Other claims lack a justification (see question 3 regarding assumption 2).\", \"The assumption of an isotropic Gaussian distribution for the generated dataset may be over-simple. Real-world datasets may have more complex dynamics.\"], \"questions\": \"**Questions:**\\n1. In Eq 4, if the median (median(s)) is close to zero, the selected value of $\\\\epsilon$ may induce some instability of V_rtm. How do you deal with those cases?\\n2. In definition 1, what is $\\\\mathcal{D}'_1$? \\n3. Assumption 2 lacks justification. What leads you to conclude that 20% of the points on any given dataset are similar to true anomalies?\\n4. What is $\\\\xi$ in Eq 5? \\n5. \\\"In many practical scenarios, such as medical diagnosis and mechanical fault detection, noisy or wrongly collected data constitute a small portion\\\" -> Do you have evidence that backs this claim?\\n6. If Bayesian optimization, according to the claims, works in the supervised setting, how can it be used in this setup that is AutoUAD?\\n\\n**Comments:**\\n- The formulation of definition 1 has some flaws. If one says that N points were drawn from a distribution it means that they were effectively drawn. One cannot then say that N is unknown. Perhaps what is really meant is that points in $\\\\mathcal{X}$ come from two different distributions, but it is not possible to determine which points come from which (D_0 or D_1). Please reformulate along the latter lines, clearly stating where points come from, without the need of mentioning the quantities (N). Also, explain what is the difference between $\\\\mathcal{D}_1$ and $\\\\mathcal{D}'_1$.\\n- Assumption 1 seems to be missing something: \\\"... will assign the majority of data points with low anomaly scores\\\" to what? Same for the minority. 
Please complete.\\n- What is the point of Figure 2? It does not seem to match what is written in the text. What makes the difference between, for instance, Fig. 2a and 2b?\\n- The observations from assumption 1 and Figure 2 are somehow trivial. This is the typical assumption in most anomaly detection setups.\\n- There is some text missing in definition 4 (\\\"there exists and the variance..\\\" -> there exists what?) that makes the text incomprehensible. Please revise the sentence. \\n\\n**Minor comments:**\\n- \\\"Histograms of training anomaly scores of models with low high testing AUCs\\\" -> with low and high?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal: Part IV\", \"comment\": \"**Q8, 9, 11, F7, and F8:** We apologize for the typo and inconsistency in the main text. We notice this is a serious typo and it makes the reading of the whole section difficult. We revise Definition 4 by changing $s_k$ into $\\\\xi$, making AG a function related to $\\\\xi$ (a value from the distribution of $s$) with a parameter $p(s)$. Furthermore, we emphasize that $p(s)$ is an unknown and theoretical distribution of the anomaly score $\\\\boldsymbol{s}$ to reduce the ambiguity.\\n\\n**Q10:** The intuition of AG is similar to RTM. We hope a good UAD model assigns relatively high anomaly scores to a small portion of training samples. AG utilizes the variance information of the two groups of anomaly scores split by a threshold to avoid the case explained in Q7. In our new revision, $w_0(k)$ and $w_1(k)$ are replaced by $w_0(\\\\xi)$ and $w_1(\\\\xi)$, respectively. $\\\\xi$ is a threshold value in the domain of $s$. In a discrete form, $w_0(\\\\xi)$ and $w_1(\\\\xi)$ are $|\\\\{s_i \\\\mid s_i < \\\\xi\\\\}|/N$ and $|\\\\{s_i \\\\mid s_i \\\\ge \\\\xi\\\\}|/N$, respectively. They naturally serve as weights. 
\\n\\n**Q12:** Using AG solely to evaluate the UAD model depends on the choice of a sensitive hyper-parameter, i.e., the threshold ($\\\\xi$ in the new version). By taking the expectation, we hope to reduce the sensitivity. When choosing a lower threshold, the output AG seems meaningless to evaluate a UAD model. So, we choose to use the conditional expectation to reduce the bias from choosing lower thresholds. Surely it will reduce the expectation, but this is what we want. Otherwise, the distinction between the \\\"good\\\" and \\\"bad\\\" UAD models will be small in terms of EAG.\\n\\n**Q13:** No. Train-val-test split cross-validation is non-trivial in the unsupervised learning setting. In supervised learning tasks, a **labeled** validation set can be used to prevent overfitting. In contrast, the validation set in the unsupervised task is still **unlabeled**. No ground-truth information can be revealed using a validation set. We tried to simply apply RTM and EAG on an **unlabeled** validation set (using 30\\\\% of $\\\\mathcal{X}$, consistent with NPD in the experiment), yet the performance of the selected model remains almost unchanged or becomes worse. We show the average testing AUC results across 10 datasets (Shuttle, Arrhythmia, Fault, Glass, Hepatitis, InternetAds, Ionosphere, Landsat, Letter, and Lymphography) using RTM below.\\n\\n### Table: Average testing AUC across 10 datasets.\\n\\n| **Model** | **RTM (Original)** | **RTM (Selected on Validation)** |\\n|--------------|----------------------|-----------------------------------|\\n| **OCSVM** | $72.61 \\\\pm 21.8$ | $72.98 \\\\pm 21.5$ |\\n| **DPAD** | $76.56 \\\\pm 17.1$ | $73.4 \\\\pm 16.9$ |\\n\\nA more important limitation of RTM and EAG is that they have an additional hyper-parameter. Therefore, we propose NPD.\\n\\n**Q14 and W1: strong assumption:** In Definition 6, we do not assume the \\\"normal\\\" data is approximated by an isotropic Gaussian. 
We illustrate the working flowchart in https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf for your better understanding. In NPD, we use the **unlabeled** $\\\\mathcal{X} _{val}$ to represent a \\\"normal\\\" set as it is indeed a subset of the normal data $\\\\mathcal{X}$. \\nWe then generate an extra dataset $\\\\mathcal{X} _{gen} \\\\sim \\\\mathcal{N}(\\\\boldsymbol{\\\\mu} _{trn}, \\\\text{diag}(\\\\sigma _{trn}^2))$. **The intuition behind this is that we hope $\\\\mathcal{X} _{gen}$ can contain samples very close to real anomalies or that are not similar to the \\\"normal\\\" data**. A good UAD model should be able to distinguish the difference between $\\\\mathcal{X} _{val}$ and $\\\\mathcal{X} _{gen}$.\\n\\n**Q15:** As justified in Q14's response, we do not pose any assumption on the training data. Real-world datasets are more complex since their features can be correlated non-linearly. Actually, we considered the correlation between coordinates in real-world data, i.e., in $\\\\mathcal{X} _{trn}$. For this reason, we select the isotropic Gaussian, which has higher entropy, to generate diverse samples that are not similar to $\\\\mathcal{X} _{trn}$. It is also justified in Figs. 3, 4, and 7. It shows $\\\\mathcal{X} _{gen}$ is not similar to $\\\\mathcal{X} _{trn}$ and $\\\\mathcal{X} _{gen}$ often contains samples close to the real anomalies. In Theorem 3, we theoretically show the divergence between $\\\\mathcal{X} _{gen}$ and $\\\\mathcal{X} _{trn}$.\\n\\n**Q16:** The outline of $\\\\mathcal{X} _{gen}$ (not $\\\\mathcal{X} _{trn}$) is generally a hypersphere because $\\\\mathcal{X} _{gen}$ is generated from an isotropic Gaussian. In contrast, the outline of $\\\\mathcal{X} _{trn}$ can be very different from a hypersphere because the variance across different coordinates varies. We use Fig. 
4 to show that $\\\\mathcal{X} _{trn}$ is roughly enclosed by the hypersphere determined by $\\\\mathcal{X} _{gen}$, though the \\\"normal\\\" (blue) points have strong correlations across different coordinates.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nSince the author-reviewer discussion period is coming to an end, we'd like to know whether our additional response addressed your doubts and questions. Thank you for your time.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal: Part III\", \"comment\": \"**Comment 1: Definition 1:** Thanks for your comment. What we want to express is exactly the same as you suggested. The confusion may be caused by the fact that there are three numbers $N,N_0,N_1$ in the definition. $N$ is actually known since it is the total number of samples in our training dataset $\\\\mathcal{X}$. In contrast, $N_0$ and $N_1$ are unknown, though $N_0 + N_1 = N$. We think it is sufficient to express \\\"it is not possible to determine which points come from which ($D_0$ or $D_1$)\\\". Please feel free to let us know if this is still not clear enough.\\n\\n**Comment 1: difference between $\\\\mathcal{D}_1$ and $\\\\mathcal{D}'_1$:** \\n$\\\\mathcal{D} _1$ is the distribution of the anomalies in the training data if $N_1\\\\neq 0$. $\\\\mathcal{D}'_1$ is the distribution of anomalies in the (unseen) test data. The distribution of anomalies in the test data is not necessarily identical to the distribution of possible anomalies in the training data. That's why we introduce $\\\\mathcal{D}'_1$ in addition to $\\\\mathcal{D}_1$.\\n\\n\\n**Comment 3:** Fig. 2(a) is an OCSVM with a low test AUC score. It does not satisfy Assumption 1 because the majority of training samples have high anomaly scores. Fig. 2(b) is an OCSVM with a high test AUC score, where the majority of training samples have relatively low anomaly scores. It satisfies Assumption 1. By comparing Fig. 2(c) and Fig. 
2(d), although both of them seem to satisfy Assumption 1, the model with higher test AUC assigns higher anomaly scores to a small percentage of data described in line 235 (new version). So, models with high testing AUC should be evaluated with a higher metric value like RTM and EAG. We believe the figures are consistent with the main text.\\n\\nThe reason for the differences between them is the use of **different hyper-parameters** of the model trained on the same training data. That's why we consider hyper-parameter optimization challenging.\\n\\n**Comment 4:** Thanks for the comment. Since it is common in UAD research, we adopt such an assumption here to propose the RTM metric. However, such a metric requires setting an additional hyper-parameter, e.g., $\\\\tau$, making the performance empirical. Hence, we further propose the NPD metric, which does not require an additional hyper-parameter.\\n\\n**Comment 5:** We apologize for the typo. The correct sentence should be \\\"..., there exists $\\\\xi$ such that $P(s \\\\ge \\\\xi)$ is small, and ...\\\" Please see our revised Eq. (5).\\n\\nWe hope our response can be helpful in addressing your concerns. Please do not hesitate to let us know if any of your concerns haven't been properly addressed or you have further questions.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Dear Reviewer 7zTy:\\n\\nThank you so much for recognizing the novelty and effectiveness of our method. \\nRegarding the weakness you mentioned, we here explain the whole pipeline of AutoUAD for your better understanding. Particularly, we draw an intuitive figure to show the flow chart of AutoUAD at https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf.\\n\\n**Motivation:** In UAD, since the training data are unlabeled, optimizing the hyper-parameters of any UAD method and comparing different UAD methods are both challenging. 
We need to establish some metrics to evaluate the quality of UAD methods by utilizing the unlabeled training data only, **without accessing any labeled data**.\\n\\n**Contributions:** \\n- In this paper, we propose two internal evaluation metrics, relative-top-median (RTM) and expected-anomaly-gap (EAG), and one semi-internal evaluation metric, normalized pseudo discrepancy (NPD), to measure the quality of a UAD model based on the **unlabeled** training data. \\n\\n- We implement automated UAD using Bayesian optimization, for which the objective is maximizing one of RTM, EAG, and NPD, with respect to the hyperparameters of UAD methods. Therefore, our AutoUAD automatically and efficiently selects the possibly best hyper-parameters for UAD methods.\\n\\n- We provide theoretical guarantees for our NPD metric to ensure feasibility and reliability. In this revision, we provide one more theorem (Theorem 4 in Appendix B) that states that maximizing our NPD metric can reduce the upper bound of the false positive rate and false negative rate.\\n\\n- Extensive empirical experiments on 38 benchmark datasets (detailed in Appendix D) show the proposed NPD metric consistently outperforms existing model selection heuristics and works well on complex state-of-the-art UAD algorithms.\\n\\n**AutoUAD Details:** We added a working flowchart for your easier understanding of AutoUAD at https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf. \\nIn the training phase, candidate UAD models $\\\\mathbb{M}$ contain multiple UAD algorithms $\\\\{\\\\mathcal{M} _1, \\\\mathcal{M} _2, ..., \\\\mathcal{M} _C \\\\}$, and each algorithm $\\\\mathcal{M} _i$ has multiple candidate hyper-parameters:\\n$\\\\{\\\\Theta _i^{(1)}, \\\\Theta _i^{(2)}, ... , \\\\Theta _i^{(j)}, ...\\\\}, \\\\Theta _i^{(j)} \\\\in \\\\prod^{H_i} _{k=1} \\\\mathcal{S} _k^{(i)}.$\\nFor each $\\\\mathcal{M}_i$, the Bayesian optimizer will select hyper-parameters to train the UAD model. 
After training ends, the model is evaluated by our (semi-)internal metrics (RTM, EAG, or NPD) using the training set where no ground-truth label is required. The evaluation output $\\\\mathcal{V}(\\\\mathcal{M}_i, \\\\mathcal{X})$ provides feedback to the Bayesian optimizer to select new hyper-parameters for the next round. During the whole training process, no ground-truth label is required.\\n\\nFor internal metrics, RTM and EAG, we obtain $\\\\boldsymbol{s}$ from the trained UAD model $\\\\mathcal{M}_i(\\\\Theta_i^{(j)})$ inferred on the training dataset $\\\\mathcal{X}$. Then, RTM and EAG can be calculated using Eq. (4) and Eq. (6), respectively.\\n\\nFor the proposed semi-internal metric NPD, the training dataset is randomly split into $\\\\mathcal{X} _{trn}$ and $\\\\mathcal{X} _{val}$ before the training phase. \\nAn extra dataset $\\\\mathcal{X} _{gen}$ is generated from an isotropic Gaussian distribution:\\n$\\n\\\\mathcal{N}(\\\\boldsymbol{\\\\mu} _{trn}, \\\\text{diag}(\\\\boldsymbol{\\\\sigma}^2 _{trn})),\\n$\\nwhere $\\\\boldsymbol{\\\\mu} _{trn}$ and $\\\\boldsymbol{\\\\sigma}^2 _{trn}$ are the mean and variance vectors of $\\\\mathcal{X} _{trn}$.\\nThe dataset $\\\\mathcal{X} _{trn}$ is used to train the UAD model. After training, anomaly scores $\\\\boldsymbol{s} _{val}$ and $\\\\boldsymbol{s} _{gen}$ are computed from $\\\\mathcal{X} _{val}$ and $\\\\mathcal{X} _{gen}$, respectively. Then, NPD is calculated by taking $\\\\boldsymbol{s} _{val}$ and $\\\\boldsymbol{s} _{gen}$ as input using Eq. (7).\\nNote that $\\\\mathcal{X} _{val}$ is still an **unlabeled** subset of $\\\\mathcal{X}$, and $\\\\mathcal{X} _{gen}$ is just random noise. These datasets, $\\\\mathcal{X} _{val}$ and $\\\\mathcal{X} _{gen}$, are only used for semi-internal evaluation and are not involved in model training.\\n\\nThe intuition behind NPD is that we hope $\\\\mathcal{X} _{gen}$ contains samples that are not similar to the normal data $\\\\mathcal{X}$. 
The model should be able to distinguish the difference between $\\\\mathcal{X} _{val}$ and $\\\\mathcal{X} _{gen}$.\\nWe argue that $\\\\mathcal{X} _{gen}$ can contain samples close to real anomalies, so that NPD can highlight the significance of evaluating a good UAD model. This claim is justified in Theorem 3.\\n\\nIn the testing stage, the selected model ($\\\\mathcal{M}_i(\\\\Theta_i^*)$ or $\\\\mathcal{M}^*(\\\\Theta_i^*)$) is evaluated by $\\\\mathcal{E}$ (AUC/F1 metric) using the testing set with the ground-truth label to show the effectiveness of our methods.\\n\\nWe hope the clarification and the newly added flowchart are helpful for your understanding. We are looking forward to your further feedback.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal: Part I\", \"comment\": \"Dear Reviewer DpZ3,\\n\\nWe sincerely thank you for your valuable comments. It seems that there are a few misunderstandings (e.g. Weakness 2 and Weakness 4). Our detailed responses to your comments are as follows.\\n\\n**Weakness 1:** Thank you so much for pointing out these writing issues. We have fixed them in the revision. The revised details are listed below:\\n\\n- **Question 2:** $\\\\mathcal{D}'_1$ is the distribution of true anomalies in the test data, while $\\\\mathcal{D} _1$ is the distribution of possible anomalies (unlabeled) in the training data. We use both $\\\\mathcal{D} _1$ and $\\\\mathcal{D}'_1$ since the distribution of anomalies in the test data is not necessarily identical to the distribution of possible anomalies in the training data, which is a standard assumption in unsupervised anomaly detection. Moreover, the training data may not contain any anomalies.\\n\\n- **Question 4:** We notice this is a typo and it makes the reading of the whole section difficult. We revise Definition 4 by changing $s_k$ into $\\\\xi$, making AG a function related to $\\\\xi$ (a value from the distribution of $s$) and $p(s)$. 
Furthermore, we emphasize that $p(s)$ is an unknown and theoretical distribution of the anomaly score $\\\\boldsymbol{s}$ to reduce the ambiguity.\\n\\n- **Comment 2:** We apologize for the confusion. This was a grammar error. The correct sentence should be \\\"A good UAD model will assign low anomaly scores to the majority of data points, and assign relatively high anomaly scores to the minority of data points.\\\"\\n\\n- **Minor Comments:** Thanks for your careful reading. This is a typo. It should be \\\"Histograms of training anomaly scores of models with low **and** high testing AUCs\\\"\\n\\n**Weakness 2:** \\\"Definition 2 and remark 3.2 introduce labels for normal and abnormal points... \\\"\\n\\n**Response:** The notation of $\\\\tilde{\\\\mathcal{X}}$ and $\\\\tilde{\\\\mathcal{Y}}$ represents the **testing dataset**. \\n* Our experiments and indeed all UAD research require a labeled test set to evaluate the effectiveness of final UAD models. In our work, we use the test dataset to evaluate the performance of the hyper-parameter-optimized UAD model. In the training stage and hyperparameter selection stage, we never use labeled data. \\n* Moreover, we introduce $\\\\tilde{\\\\mathcal{X}}$ and $\\\\tilde{\\\\mathcal{Y}}$ in Definition 2 and formula (3) because we were presenting the goal of AutoUAD and the surrogate function $\\\\mathcal{V}\\\\left(\\\\mathcal{M}_i, \\\\mathcal{X}\\\\right)$. The goal of AutoUAD is to perform hyperparameter optimization and model selection on (unlabeled) training data to ensure high accuracy on the unseen testing data.\\n\\nTo give a comprehensive understanding of where labeled data are used, we provide a flowchart of AutoUAD at https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf, where no ground-truth label is required during the training.\\n\\n**Weakness 2:** \\\"Definition 4 there seems to be the notion of supervision ...\\\"\\n\\n**Response:** Sorry for the confusion. 
Here, we would like to express that in an ideal case, selecting the threshold value depends on a prior anomaly ratio. Here, however, we do not have such prior information. As the following sentence explains, \\\"due to ... and the lack of supervision, ...\\\", threshold selection becomes an additional hyper-parameter, which makes AG and even EAG not work well in the experiment. Therefore, no supervision is involved.\\n\\n**Weakness 3 and Question 3:** Thanks for your suggestions. As much of the unsupervised outlier/anomaly detection literature assumes, the majority of the training data is normal. For the specific anomaly ratio in the dataset, scholars often make assumptions about the anomaly ratio [1, 2]. One can also refer to [3] to estimate the anomaly ratio in the dataset. Hence, we claim the assumption is proper. Nevertheless, such an assumption can also be regarded as a hyper-parameter in the EAG metric. So, we further propose the NPD metric, which does not rely on Assumptions 1 and 2.\\n\\n[1] Nicolas Goix. How to evaluate the quality of unsupervised anomaly detection algorithms? ICML Workshop 2016.\\n\\n[2] Qiu et al. Latent outlier exposure for anomaly detection with contaminated data. ICML 2022.\\n\\n[3] Li et al. Deep anomaly detection under labeling budget constraints. ICML 2023.\\n\\n**Weakness 4:** Thanks for the comment. The isotropic Gaussian is not an assumption for the real-world dataset. In contrast, we intend to use the isotropic Gaussian to form a region (hypersphere) to enclose the training data. In the hypersphere, the regions without training data are the regions where unseen anomalies may fall. We use the isotropic Gaussian to guide the model to learn a tight decision boundary. Real-world datasets are more complex. Our idea is also justified in Figs. 3, 4, and 7. 
It shows $\\\\mathcal{X} _{gen}$ is not similar to $\\\\mathcal{X} _{trn}$ and often contains samples close to the real anomalies.\\nWe added one more theorem (Theorem 4 in Appendix B) that provides a theoretical guarantee (error bound) for our method.\"}", "{\"title\": \"Rebuttal: Part II\", \"comment\": \"**Question 1:** This is really an insightful question.\\nWe set $\\\\epsilon=1e-9$ in the experiments and we barely observed the case where median(s) is close to 0. The following table provides some statistical information, where the median of $\\\\mathbf{s}$ is much larger than our $\\\\epsilon$. In other words, compared with the $\\\\text{median}(\\\\boldsymbol{s})$, $\\\\epsilon$ is always sufficiently small. In the 500 BO searches, there were no occurrences ($0\\\\%$) in which the $\\\\text{median}(\\\\boldsymbol{s})$ fell into the range $[-1e-6,1e-6]$. Therefore, the bias or instability given by $\\\\epsilon$ is tiny. \\n\\nThe small $\\\\text{median}(\\\\boldsymbol{s})$ may be a shortcoming of our $\\\\mathcal{V}_ {\\\\text{RTM}}$ but the other two metrics ($\\\\mathcal{V}_ {\\\\text{EAG}}$ and $\\\\mathcal{V}_ {\\\\text{NPD}}$) we proposed do not have this shortcoming. More importantly, as shown by the experiments (e.g. Table 1), $\\\\mathcal{V}_ {\\\\text{NPD}}$ is much more effective than $\\\\mathcal{V}_ {\\\\text{RTM}}$.\\n\\n**Table:** Statistical information about the training anomaly score **$\\\\boldsymbol{s}$** for 4 UAD methods on the Satellite dataset over 500 BO searches.\\n\\n| **UAD Model** | **Avg. 
Median($\\\\boldsymbol{s}$)** | **Min($\\\\boldsymbol{s}$)** | **Max($\\\\boldsymbol{s}$)** |\\n|---------------|-----------------------------------|---------------------------|---------------------------|\\n| AE | $1.628 \\\\pm 0.246$ | 0.5698 | 9.155 |\\n| DeepSVDD | $0.7376 \\\\pm 0.213$ | 0.0087 | 10.03 |\\n| OCSVM | $0.4013 \\\\pm 6.26$ | -94.99 | 554.8 |\\n| DPAD | $0.0026 \\\\pm 0.0002$ | 0 | 0.1223 |\\n\\n\\n**Question 5:** Anomaly detection, also known as novelty detection or rare event detection, means the anomaly is rare. It is common in many real applications, such as security (intruders), geology (earthquake), food control (foreign objects), economics (bankruptcy), and neuroscience (an unexperienced stimulus) [1]. In addition, consider a cloud computing system requiring high system availability ($\\\\ge 99.9$\\\\%) [2]. Then, the collected system log may only contain $0.1$\\\\% system failure data.\\n\\nThese anomaly events themselves are rare. If they are unexpectedly considered as \\\"normal\\\" data, it is still a very small portion. We hope the evidence can support our claim.\\n\\n[1] Ander et al. Analyzing rare event, anomaly, novelty and outlier detection terms under the supervised classification framework. Artificial Intelligence Review 53 (2020): 3575-3594.\\n\\n[2] Kai Hwang. Cloud computing for machine learning and cognitive applications. Mit Press, 2017, Chapter 1, 1.4.\\n\\n**Question 6:** Although Bayesian optimization is usually used for tuning the hyperparameters in supervised learning, it can be used for unsupervised learning **when we derive an effective surrogate function or metric to evaluate the model performance using the unlabeled data only**. For instance, [3] used Bayesian optimization to search hyperparameters for spectral clustering algorithms, which are unsupervised. 
\\n\\nIn our work, as described in the text around formula (3), we want to construct some $\\\\mathcal{V}\\\\left(\\\\mathcal{M}_i, \\\\mathcal{X}\\\\right)$ to evaluate the model performance using the unlabeled training data only and we hope that $\\\\mathcal{V}\\\\left(\\\\mathcal{M}_i, \\\\mathcal{X}\\\\right)$ is a good proxy for the expected testing error (never known). Therefore, we provided three examples of $\\\\mathcal{V}\\\\left(\\\\mathcal{M}_i, \\\\mathcal{X}\\\\right)$ including RTM, EAG, and NPD, which are calculated using the unlabeled training data only. We then use Bayesian optimization to maximize the three metrics with respect to the hyperparameters of UAD methods.\\n\\nTo make the general idea of applying Bayesian optimization to UAD more intuitive and clearer, we provide a working flowchart in https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf.\\n\\n[3] Fan et al. A simple approach to automated spectral clustering. NeurIPS 2022.\"}", "{\"comment\": \"Dear Reviewer 7zTy,\", \"did_our_explanation_for_autouad_and_the_flowchart__https\": \"//anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf address your concern? Please do not hesitate to let us know if there is still anything unclear or if you have further questions.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"metareview\": \"Based on the reviews, I recommend accepting the paper. All three reviewers, who are domain experts, have suggested acceptance and expressed high confidence in their evaluations. Notably, one reviewer has provided an exceptionally detailed review, offering 30 constructive suggestions and technical comments. 
The reviewers highlight several strengths of the work, including its high-quality empirical evaluation and its significance to the ICLR community.\", \"additional_comments_on_reviewer_discussion\": [\"The main points raised by the reviewers focused on clarity, assumptions, methodology, and experimental results:\", \"**Reviewer 7zTy** initially found the AutoUAD pipeline complex and difficult to understand. The authors responded by clarifying the pipeline, which addressed the concern and led the reviewer to increase the rating.\", \"**Reviewer DpZ3** pointed out issues with clarity, contradictions, and the assumption of an isotropic Gaussian distribution. The authors provided detailed explanations and additional results, particularly addressing concerns about the Gaussian assumption. This satisfied the reviewer, who revised the rating.\", \"**Reviewer FcaY** questioned the strong assumptions and the overly optimistic experimental results. The authors responded with comprehensive clarifications, including theoretical justifications and additional results, which resolved the concerns and led the reviewer to change the rating to acceptance.\", \"In weighing these points, I found the authors' responses to key concerns satisfactory. While clarity and assumptions were initially questioned, the authors addressed these through revisions and additional results.\"]}", "{\"title\": \"Rebuttal: Part V\", \"comment\": \"**Q17:** We apologize for the confusion. In Figure 6, the upper subfigures (first row) show the AUC values on the test set. The lower subfigures (second row) show the values of the metrics (NPD, EM/MV, EAG, RTM) evaluated on the training set. It can be seen that NPD is positively correlated with the AUC. We have added this clarification and labeled the y-axis explicitly.\\n\\n**Q18 and W1: overly optimistic:** No. 
Please see our newly added flowchart of AutoUAD for better understanding at https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_flowchart.pdf, where the datasets used for training and testing are separated. The data used for model selection is the **unlabeled** training data $\\mathcal{X}$. Even in NPD, $\\mathcal{X}_{val}$ is an **unlabeled** subset. Hence, we argue our results are not overly optimistic.\\n\\n**Q19:** In our UAD setting, also known as one-class classification, the training data are considered normal (they may contain a few unlabeled anomalies). Therefore, for any dataset, we only use the \\\"normal\\\" data for training. The test data contains both normal samples and anomalous samples. See our implementation details in Appendix F. As for the anomaly ratio bound in Assumption 2, we assume no more than 20\\\\% of the \\\"normal\\\" data behave like anomalies. This is often caused by mislabeling and noise in the \\\"normal\\\" data. We argue that Assumption 2 is very mild. So, benchmark datasets with higher anomaly ratios do not violate Assumption 2 because, during the training phase, we never know the anomaly ratio in the testing phase.\\n\\n**F1, 2, 3, 4, 5, 9:** Thanks for your nice suggestions. We have revised the related content in the main text.\\nFor instance, we changed the normal distribution in Definition 1 to \\\"distribution of normal data\\\", and we changed the standard normal distribution in Figure 3 to \\\"standard Gaussian distribution\\\".\\n\\n\\n**F6:** Thanks for your suggestions. If $\\\\text{median}(\\\\boldsymbol{s})=0$, the denominator can be replaced by the smallest non-zero value in $\\\\boldsymbol{s}$. If all values are 0, we set $RTM=0$. In the experiments, we rarely observed cases where $\\\\text{median}(\\\\boldsymbol{s})$ was close to 0. Please see the statistical information in the table below. The median is larger than 0 in most cases. In all experiments, we set $\\\\epsilon=1e-9$. 
Compared with the $\\\\text{median}(\\\\boldsymbol{s})$, $\\\\epsilon$ is always sufficiently small. In the 500 BO searches, there were no occurrences ($0\\\\%$) in which the $\\\\text{median}(\\\\boldsymbol{s})$ fell into the range $[-1e-6,1e-6]$. \\n\\n### Table: Statistical information about the training anomaly scores **$\\\\boldsymbol{s}$** for 4 UAD methods on the Satellite dataset over 500 BO searches.\\n\\n| **UAD Model** | **Avg. Median($\\\\boldsymbol{s}$)** | **Min($\\\\boldsymbol{s}$)** | **Max($\\\\boldsymbol{s}$)** |\\n|---------------|-----------------------------------|---------------------------|---------------------------|\\n| AE | $1.628 \\\\pm 0.246$ | 0.5698 | 9.155 |\\n| DeepSVDD | $0.7376 \\\\pm 0.213$ | 0.0087 | 10.03 |\\n| OCSVM | $0.4013 \\\\pm 6.26$ | -94.99 | 554.8 |\\n| DPAD | $0.0026 \\\\pm 0.0002$ | 0 | 0.1223 |\\n\\n**F10:** Thanks for your careful reading. NPD is correct; we have revised the text and figures.\\n\\n**We are looking forward to your feedback, and please do not hesitate to let us know if there are any concerns or questions that are still not properly addressed. We are always here and eager to respond to any of your questions. Thank you.**\"}", "{\"comment\": \"Many thanks for the effort to prepare the detailed rebuttal. I have updated the score, reflecting some doubts I still have on the usage of a Gaussian distribution. As the authors claim, real datasets are much more complex and the paper fails to explain why this would still work.\\n\\nSecond, if 20% of anomalies is a hyperparameter, how sensitive is the method to it?\"}
In our paper, we made two assumptions:\\n * Assumption 1: A good UAD model will assign the majority of data points with low anomaly scores, and assign the minority of data points with relatively high anomaly scores.\\n * Assumption 2: (Low-Quality Data Upper Bound) At most $20\\\\%$ data points in the training set are similar to the true anomalies, in which the UAD model gives them higher anomaly scores.\\n\\nAssumption 1 is commonly used in unsupervised anomaly detection. It is really a very weak assumption. Assumption 2 is also a weak assumption in unsupervised anomaly detection, where we usually assume that most of the training data are normal. (Note that here we use \\\"At most $20$\\\\%\\\", not \\\"$20$\\\\%\\\".) \\n\\nIt is also worth mentioning that Assumption 1 and Assumption 2 are for our first two metrics RTM and EAG respectively. Our third metric NPD does not rely on any assumption. We generated $\\\\mathcal{X} _{gen}$ from an isotropic Gaussian with the same mean and variance vectors as $\\\\mathcal{X}$ but we never assume that real data are close to $\\\\mathcal{X} _{gen}$. We only use $\\\\mathcal{X} _{gen}$ to form a hypersphere roughly enclosing $\\\\mathcal{X}$, as shown in Figure 4 and Figure 7. More importantly, the effectiveness of our methods is demonstrated by 38 datasets from diverse fields, which also indicates that our methods do not rely on strong assumptions.\\n\\n**On reported results**\\\\\\nYou commented that \\\"the results of experiments seem to be overly optimistic with results being reported on the same data that was used for model selection and hyperparameter tuning\\\". We'd like to clarify that in the experiments for every dataset, there is a training set and a testing set, and the training set is unlabeled. We performed model selection and hyperparameter tuning on the training set and obtained the final optimal model. 
The results we reported in the tables and figures are obtained by applying the final optimal model to the testing set.\"}", "{\"title\": \"Rebuttal: Part III\", \"comment\": \"**Q1:** Thank you for your suggestion. Clustering and dimensionality reduction are another important unsupervised learning task. We add the following paragraph in the Related Work section:\\n\\nNote that hyperparameter tuning is a challenge in other unsupervised learning tasks as well. (Halkidi \\\\& Vazirgiannis, 2001; Poulakis, 2020; Fan et al., 2022) presented some clustering validity metrics\\nto guide the hyper-parameter search using grid search or Bayesian optimization. (Lin \\\\& Fukuyama,2024; Liao et al., 2023) also used Bayesian optimization to tune the hyperparameter in dimensionality reduction methods such as t-SNE.\\n\\n[1] Halkidi and Vazirgiannis. Clustering validity assessment: Finding the optimal partitioning of a data set. ICDM 2001.\\n\\n[2] Giannis Poulakis. Unsupervised automl: a study on automated machine learning in the context of clustering. 2020.\\n\\n[3] Fan et al. A simple approach to automated spectral clustering. NeurIPS 2022.\\n\\n[4] Liao et al. Efficient and robust bayesian selection of hyperparameter. arXiv:2306.00357, 2023.\\n\\n**Q2:** Take RTM as an example, $\\\\tau$ is a hyper-parameter. Due to time constraints, we perform sensitivity analysis for RTM with varying $\\\\tau$ in $[50, 30, 20, 10, 5, 3, 1]$ tested on 37 datasets (ALOI is dropped due to its size) using DPAD and OCSVM. An average result is summarized in Figure 10 of Appendix L and also shown in the following table https://anonymous.4open.science/r/AutoUAD-FDFC/AutoAD_sensitivity.pdf. It is seen that compared to OCSVM, the performance of DPAD is more sensitive to $\\\\tau$. 
\\n\\n### Table: Average test AUC RTM on DPAD and OCSVM across 37 datasets (ALOI is dropped due to its size).\\n\\n| **$\\\\tau$** | **50** | **30** | **20** | **10** | **5** | **3** | **1** |\\n|--------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|\\n| **DPAD** | $69.84 \\\\pm 20.3$ | $73.5 \\\\pm 19.2$ | $74.03 \\\\pm 18.5$ | $73.75 \\\\pm 16.0$ | $75.61 \\\\pm 15.1$ | $72.95 \\\\pm 18.9$ | $74.15 \\\\pm 19.2$ |\\n| **OCSVM** | $76.95 \\\\pm 20.3$ | $77.44 \\\\pm 20.8$ | $76.77 \\\\pm 20.8$ | $76.77 \\\\pm 20.7$ | $76.12 \\\\pm 20.9$ | $76.49 \\\\pm 20.8$ | $76.96 \\\\pm 20.1$ |\\n\\n**Q3:** The ground-truth labels is in $\\\\{0, 1\\\\}$. The input of the metric $\\\\mathcal{E}$ depends on the choice. For example, AUC is calculated based on the anomaly score while F1 needs the binary prediction. A threshold is often required in UAD research to turn the anomaly score into a binary prediction. The calculation of the F1 score is consistent with previous research [5, 6].\\n\\n[5] Shenkar and Wolf. Anomaly detection for tabular data with internal contrastive learning. ICLR 2022.\\n\\n[6] Qiu et al. Neural transformation learning for deep anomaly detection beyond images. ICML 2021.\\n\\n**Q4:** This is an insightful question. You are right. Consider the unsupervised outlier detection on the contaminated dataset, i.e. $N_1>0$ in our Definition 1. A good model should assign higher anomaly scores to those anomalies. When $N_1 = 0$, some data point always exists around the decision boundary, e.g. support vectors in OCSVM and should have higher anomaly scores. Whether a good UAD model recognizes these samples as anomalies or normal data requires further assumption or prior knowledge. In our statement, we mean a good UAD model should assign relatively high anomaly scores to these samples. \\n\\nWe find the current word is controversial. We revise it as \\\"... 
They should be assigned relatively high anomaly score by a good UAD model.\\\" \\n\\n**Q5:** The intuition behind using median(s) instead of mean(s) is that the mean is more likely to be biased toward the large values in {$\\\\{s _i | s _i < s _{\\\\text{top}\\\\tau}\\\\}$}. During our preliminary exploration, using the relative top-$\\\\tau\\\\%$ mean minus the mean often underperformed compared with RTM. Using the median in both cases may lose the information of large anomaly scores and make the gap small. In short, we want the statistic of {$\\\\{s _i | s _i < s _{\\\\text{top}\\\\tau}\\\\}$} to be reasonably small and that of {$\\\\{s _i | s _i \\\\geq s _{\\\\text{top}\\\\tau}\\\\}$} to be large.\\n\\n**Q6:** Yes. It can be interpreted as an error percentile. We have taken your suggestion in F5. Thank you so much.\\n\\n**Q7:** Consider anomaly scores inferred on $N$ training samples of the following form $\\\\{1, 2, 3, ..., N\\\\}$. In this case, RTM will give a large value. However, this case does not satisfy Assumption 1. To solve this problem, we then propose the EAG metric, which utilizes the variance of {$\\\\{s _i | s _i < s _{\\\\text{top}\\\\tau}\\\\}$}.
Equ277PBN0
Privacy-Preserving Personalized Federated Prompt Learning for Multimodal Large Language Models
[ "Linh Tran", "Wei Sun", "Stacy Patterson", "Ana Milanova" ]
Multimodal Large Language Models (LLMs) are pivotal in revolutionizing customer support and operations by integrating multiple modalities such as text, images, and audio. Federated Prompt Learning (FPL) is a recently proposed approach that combines pre-trained multimodal LLMs such as vision-language models with federated learning to create personalized, privacy-preserving AI systems. However, balancing the competing goals of personalization, generalization, and privacy remains a significant challenge. Over-personalization can lead to overfitting, reducing generalizability, while stringent privacy measures, such as differential privacy, can hinder both personalization and generalization. In this paper, we propose a Differentially Private Federated Prompt Learning (DP-FPL) approach to tackle this challenge by leveraging a low-rank factorization scheme to capture generalization while maintaining a residual term that preserves expressiveness for personalization. To ensure privacy, we introduce a novel method where we apply local differential privacy to the two low-rank components of the local prompt, and global differential privacy to the global prompt. Our approach mitigates the impact of privacy noise on the model performance while balancing the tradeoff between personalization and generalization. Extensive experiments demonstrate the effectiveness of our approach over other benchmarks.
[ "Multimodal Large Language Model", "Federated Prompt Learning", "Personalization", "Differential Privacy" ]
Accept (Poster)
https://openreview.net/pdf?id=Equ277PBN0
https://openreview.net/forum?id=Equ277PBN0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xn1eNXAoAp", "wdiMLS72Tm", "wLfqJMroaF", "vW4dwQ9LgZ", "uo8C9Sw0rr", "slKdscjssR", "rmhBbPKZ1H", "rXNbCkpty8", "oOUvHtM5Y2", "gj0zz8rLhe", "eQeqpWTQk6", "dh3mYF2zHp", "dH5UohrrF9", "ZbJjcSeXrw", "VoLRXH2TmW", "RGNHgNNFY6", "QvOKl6ammX", "PkEpDDzgQx", "Nf2hud4wxX", "LSQRjoXqbi", "KantdTD9dR", "JlFnRNWxqd", "CbQX9JtlQl", "AUtsGOXXbz", "6WZ166NITt", "5r9S2soSyA", "4DD7jV2fk7" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732943158097, 1734639702560, 1732427951688, 1732943011039, 1732498358799, 1732428204175, 1732991465625, 1732943048911, 1730531397780, 1732428084960, 1732428055122, 1732430122770, 1732943080895, 1732944041243, 1730689164071, 1730603133979, 1732660477624, 1732590979575, 1732942958576, 1732427856757, 1732428255279, 1737524192516, 1732428005522, 1732677101736, 1732427821064, 1732428161240, 1730707422918 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Area_Chair_bXPh" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_Xnij" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_BtXp" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_Xnij" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_Xnij" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_U7Cw" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_BrD2" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_U7Cw" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_BtXp" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Authors" ], [ "ICLR.cc/2025/Conference/Submission12452/Reviewer_BrD2" ] ], "structured_content_str": [ "{\"comment\": [\"We thank the reviewers for their helpful comments and suggestions. 
We are glad to see that the reviewers appreciate the following aspects of our work:\", \"Importance of the problem (BrD2, Xnij)\", \"Strong motivation of the problem (BrD2)\", \"Innovation of our proposed method (U7Cw)\", \"Extensive theoretical DP analysis (Xnij, BtXp)\", \"Strong experimental results supporting our claims (BrD2, U7Cw, BtXp)\", \"We summarize the updates we made on our manuscript to incorporate the reviewers' feedback and suggestions:\", \"Additional experiments in different complex settings, including larger datasets, different model, different data distribution, more number of clients and stricter privacy constraints (BrD2, Xnij, U7Cw).\", \"Extended ablation study on the DP noise level, factorization rank value and the residual term to further demonstrate their effect on the tradeoff between personalization, generalization and privacy (BrD2, Xnij, U7Cw).\", \"A detailed discussion about the benefit of the residual term in our method, which is supported by the extensive ablation study (BrD2, BtXp).\", \"Addition of baselines that are relevant to our work for comparison. The results show that our method is more effective and robust against privacy constraints (Xnij, BtXp).\", \"Evaluation of our proposed method against membership inference attack. The results show that our approach is effective in defending the training data with less than 10\\\\% reduction in the target model accuracy, balancing the utility-privacy tradeoff (Xnij).\", \"We note that all of the updates have been reflected and highlighted in blue in the revised paper. We thank the reviewers again for taking the time to review our paper and providing constructive feedback that has helped strengthen our submission.\"]}", "{\"metareview\": \"The paper explores Differentially Private Federated Prompt Learning (DP-FPL) for multimodal large language models, examining it from the perspectives of personalization, generalization, and differential privacy. 
The authors provide a largely straightforward theoretical analysis and present experimental results. However, they do not offer a theoretical rationale for the newly introduced residual term, even though its effectiveness is supported by an ablation study.\", \"additional_comments_on_reviewer_discussion\": \"There were relatively few experimental baselines, a concern partially addressed during the discussion phase. Initially, many terms and assertions were undefined or uncited, which the authors addressed in their rebuttal\\u2014for instance, the citation for LDP was only added in the revised manuscript, causing some confusion about the paper\\u2019s novelty. The authors also included an ablation study to further demonstrate the effectiveness of the new residual term in the objective.\"}", "{\"comment\": \"Thank you for your thoughtful review and detailed feedback. Below we address each question and concern raised.\\n\\n*1. Although I appreciate the simplicity of the proposed method, compared with existing methods, it seems that only the residuals and differential privacy are added, which are commonly used strategies, making the novelty insufficient.*\\n-----\\nWe appreciate the reviewer\\u2019s recognition of the simplicity of our method. While it is true that our approach incorporates residual connections and differential privacy\\u2014techniques that are individually well-established\\u2014the novelty of our work emerges from the innovative combination of these strategies to effectively balance personalization, generalization, and privacy.\\n\\n1. **Integration of Personalization, Generalization, and Privacy:**\\n - **Differential Privacy (DP):** DP provides robust data protection but can compromise model performance due to the added noise. To mitigate this, we employ matrix factorization to reduce the impact of noise on the model's efficacy. This approach builds on the foundation laid by Yu et al. 
(2021), who demonstrated the effectiveness of factorization in noise mitigation but did not address the interplay between personalization and generalization.\\n - **Factorization Enhancement:** Unlike FedPGP, our method modifies the factorization process to occur in every training round rather than only at the beginning. This continuous update ensures a dynamic adjustment that better accommodates the evolving model parameters, enhancing both personalization and generalization over time.\\n\\n2. **Residual Connections for Improved Local Learning:**\\n - We incorporate a residual term to promote local learning capabilities. This addition specifically addresses the limitation observed in Cui et al. (2024), where factorization improved generalization but potentially reduced local learning performance. By integrating residual connections, our method effectively balances the benefits of factorization with enhanced local adaptation, ensuring that personalization is not sacrificed for generalization.\\n\\n3. **Innovative Differential Privacy Mechanism:**\\n - Our approach introduces a hybrid differential privacy mechanism that utilizes both global and local differential privacy. Unlike conventional methods that apply noise uniformly to the entire prompt, our dual approach strategically distributes noise addition. This technique not only preserves the overall privacy guarantees but also minimizes the adverse effects of noise on model performance, achieving a more refined balance between privacy and utility.\\n\\nIn summary, our method distinguishes itself by **combining residual connections and differential privacy in a novel framework** that simultaneously enhances personalization, generalization, and privacy. This integrated approach addresses the shortcomings of existing methods and offers a more balanced and effective solution for protecting prompt data without compromising model performance. 
We believe these contributions substantively advance the current state of the art.\\n\\nDa Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Large scale private learning via low-rank reparametrization. In *International Conference on Machine Learning*, pp. 12208\\u201312218. PMLR, 2021.\\n\\nTianyu Cui, Hongxia Li, Jingya Wang, and Ye Shi. Harmonizing generalization and personalization in federated prompt learning. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=YYwERRXsJW.\"}", "{\"comment\": \"We thank the reviewer for engaging in the rebuttal and raising their score. To address your concern about the privacy-preserving performance of our method, we have conducted further experiment with Membership Inference Attack (MIA) for three datasets: Caltech101, Oxford Pets and Oxford Flowers. The detailed implementation is described in the appendix of the revised paper along with the experimental results and discussion. We summarize our findings as follows.\\n\\nFigure 3 (<https://ibb.co/1vh2Q25>) describes the target model accuracy on local classes (a) and neighbor classes (b), and the success rate of the MIA (c) for all three datasets.\\nWe observe in Figure (c) that the success rate is low (less than the random guessing baseline 50\\\\%) when $\\\\epsilon = 0.1$ for all datasets. In addition, $\\\\epsilon = 0.1$ causes less than 10\\\\% reduction in the target model accuracy for both local and neighbor classes as shown in Figures (a) and (b). This shows that our approach effectively protects the training data from MIA while still maintaining good model performance, balancing the utility-privacy tradeoff.\\n\\nWe thank the reviewer for your useful comments and suggestions. 
We would appreciate any further feedback you can provide to help us improve our submission, and thank you for your time.\"}", "{\"comment\": \"Thanks to the authors for the detailed response, most of my questions were well addressed and I increased the score accordingly. However, I am still concerned about the gap between theoretical privacy guarantee and actual privacy performance. The paper cited by the authors is also not peer-reviewed. It would be better if this part of the experiment could be supplemented.\"}", "{\"comment\": \"*4. The model shows a performance trade-off between privacy, personalization, and generalization, yet this is not adequately explored in real-world applications. While the authors apply privacy noise at various levels, the impact on performance for real-world, heterogeneous data remains unclear. An experiment demonstrating this trade-off would strengthen the paper.*\\n-----\\nAs mentioned in our response to point 3, we conducted further experiments on a real-world non-IID data distribution using a Dirichlet data split. We added detailed discussion and analysis about the trade-off between accuracy and privacy for the new experiments in **Section 4.2**.\\n\\n*5. The paper experiments with moderate privacy levels $(\\\\epsilon=0.1,0.2,0.4)$, but it does not examine stricter levels $(\\\\epsilon<0.1\\\\epsilon<0.1\\\\epsilon<0.1)$ that may be necessary for sensitive applications like healthcare or finance. Including results for $\\\\epsilon=0.05$ and $\\\\epsilon=0.01$ would provide a fuller understanding of the model\\u2019s robustness under high privacy demands.*\\n-----\\nAs mentioned in our response to point 2, we have conducted more experiments with stricter noise level $\\\\epsilon = \\\\{ 0.01, 0.05 \\\\}$. The results (Tables 2 to 4) show that our approach is generally more robust than other baselines even under high privacy levels.\\n\\n*6. 
The dataset heterogeneity is simulated by randomly assigning class labels to clients, which does not accurately capture real-world, non-IID data distributions. Employing a Dirichlet distribution to split the dataset would more effectively simulate realistic data diversity, enhancing the model's evaluation under practical federated learning conditions.*\\n-----\\nAs mentioned in our responses to points 3 and 4, we have conducted further experiments using Dirichlet data split per your request, and we reported the results in Table 4.\\n\\n*7. Although the paper notes that DP-FPL degrades gradually as DP noise increases, a clearer breakdown of this effect on both personalization and generalization\\u2014specifically for local and neighbor classes\\u2014across different datasets would add clarity to the trade-offs involved.*\\n-----\\nWe have added more in depth ablation study on the effect of the noise level as well as the rank and residual term to further study the trade-off between privacy and accuracy (local and neighbor classes). We reported the ablation experiments for Caltech101 dataset in Figure 2 in the main paper, and included the results for other datasets in the appendix. To summarize, higher DP noise generally degrades both local and neighbor classes accuracy for most datasets. However in some special cases, certain noise ranges can improve generalization for specific datasets by preventing overfitting to local classes. This behavior is highly dependent on the data sensitivity of the dataset. In addition, if the noise is too large, the overall utility will degrade with no improvement in generalization capability.\\n\\n*8. Privacy noise appears to improve generalization by reducing overfitting; however, Table 2 shows mixed effects across datasets and noise levels. For instance, generalization to neighbor classes improves with increased privacy noise on Caltech101 but is inconsistent on Oxford Flowers. 
Analyzing this inconsistency could provide insights into the regularization benefits of DP noise.*\\n-----\\nThis is an excellent observation; we thank the reviewer for pointing this out. We note that it is expected that the accuracy (for both local and neighbor classes) will degrade as we increase the privacy level. However, there is an atypical behavior for the Caltech101 dataset where higher privacy noise improves generalization. We hypothesize that this is because privacy noise acts as a form of regularization, and this behavior may be distinct for every dataset due to the difference in data sensitivity. Nevertheless, if the noise level is large enough, the overall utility will degrade and we will not have the benefit of generalization. To further demonstrate our conjecture, we ran additional experiments for Caltech101 with stricter noise levels $\\\\epsilon = \\\\{ 0.01, 0.05 \\\\}$. The results (Table 3 in the revised paper) show that the neighbor accuracy no longer improves under too strict privacy constraints. We have updated the discussion to illustrate this point in **Section 4.2**.\"}", "{\"comment\": \"We thank the reviewer for your prompt response and for raising the score. We are glad that we have addressed all of your questions with the addition of the ablation study, membership inference attack, more baseline comparison and complexity analysis. We appreciate your helpful feedback and suggestions which have helped significantly strengthen our paper, and thank you for your time.\"}", "{\"comment\": \"Thank you for your feedback and engagement with our work. We appreciate your recognition of the extended experiments we have conducted on more complex settings, including strict privacy constraints and real-world non-IID data distribution. The additional empirical results have provided more insight into the effectiveness of our method, especially on the relationship between generalization and privacy. 
We appreciate your time and feedback, which has helped improve the overall quality of our paper.\"}", "{\"summary\": \"This paper introduces a Differentially Private Federated Prompt Learning (DP-FPL) approach to achieve a performance balance between personalization and generalization in the FL setting. Specifically, compared to previous work on this subject, DP-FPL adds a residual term to the low-rank decomposition that achieves a better balance between privacy and effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is in general well-written and easy to follow.\\n2. The effectiveness of the proposed framework is demonstrated by comparison with existing works, and the improvements are significant.\\n3. DP theoretical analysis is performed.\", \"weaknesses\": \"1. Since the introduction of the DP part is quite standard for privacy preservation, the novelty mostly lies in the introduction of the residual term compared to FedPGP.\\n\\n2. Although the standard theoretical analysis on DP is provided, it does not address the impact of the introduction of the residual term. The paper would be enhanced significantly if it could provide some theoretical insights on the impact of the residual on the utility trade-off, which would support its experimental results.\\n\\n3. The experimental baselines are a bit limited. Only two methods are compared, while many federated prompt learning methods have been proposed in the past, such as PromptFL, pFedPrompt, FedOTP, etc.\", \"questions\": \"If I understand correctly, FedPGP in Table 2 in the experiment section applies the original FedPGP with DP, and is therefore the same as \\\"without residual\\\" in Table 3. Correct? In other words, is the result difference between FedPGP and DP-FPL in Table 2 due to the residual term only?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"*7. 
Lack of complexity discussion.*\\n-----\\nLow-rank decomposition incurs a heavy computational cost if we use SVD. However, in our method we use the power method (one iteration) for the decomposition process, which significantly reduces the computational cost. Given the original full-rank matrix of size $m \\\\times n$ (assuming $m \\\\leq n$), the computational cost of SVD scales with $\\\\mathcal{O}(m^2 n)$, while the power iteration only scales with $\\\\mathcal{O}(kmn)$, where $k$ is the reduced rank and $k \\\\ll m$. We have added the complexity discussion in **Section 3.4**.\\n\\n*8. Minor: What is $\\\\epsilon$ in Table 4?*\\n-----\\nTable 4 (Table 6 in the revised version) shows the performance of our method without any differential privacy noise (non-private setting), so no $\\\\epsilon$ was used in this experiment. We have updated the caption to make this clear.\"}", "{\"comment\": \"*6. There are a large number of assertions that have not been confirmed and are all the author's personal conjectures. For example, \\\"simply applying LoRA can result in loss of expressiveness during training, ...\\\", \\\" this approach does not protect the privacy of the prompt data..., Therefore, privacy noise must be incorporated into the gradient updates during each training step for privacy guarantee\\\", \\\"the impact of the GDP noise on the model utility is much smaller compared to LDP\\\".*\\n-----\\nWe thank the reviewer for highlighting these concerns. We apologize for the lack of sufficient explanations and evidence supporting our claims in the initial submission. Below, we address each point in detail and have incorporated the necessary revisions into the updated paper:\\n\\n1. 
*\\\"simply applying LoRA can result in loss of expressiveness during training, ...\\\"*\\n\\n Prior research has demonstrated that LoRA often struggles to match the performance of full fine-tuning on several challenging tasks (Liu et al., 2024; Biderman et al., 2024; Ivison et al., 2023; Zhuo et al., 2024). This limitation stems from the fact that LoRA, along with other low-rank decomposition methods, constrains the parameter space by discarding some information inherent in the original full-rank space (Konecny, 2016). We have updated this statement in the manuscript to reflect these findings (see **Section 3.2**).\\n\\n2. *\\\" this approach does not protect the privacy of the prompt data..., Therefore, privacy noise must be incorporated into the gradient updates during each training step for privacy guarantee\\\"*\\n\\n We recognize that our previous assertion was overly definitive. To clarify, adding noise at the final stage of training does offer a degree of data protection. However, introducing noise incrementally throughout the training process provides better control over its impact on the model, thereby enhancing utility (Abadi et al., 2016). Additionally, Wu et al. (2024) indicates that a substantially larger privacy budget (e.g., $\\\\epsilon = 20$) is necessary to effectively safeguard data against membership inference attacks when noise is only added at the final step, which negatively affects model utility. Consequently, incorporating noise during each training step is a more widely adopted and preferable method. We have revised our manuscript to include this clarification and supporting evidence (see **Section 3.3**).\\n\\n3. *\\\"the impact of the GDP noise on the model utility is much smaller compared to LDP\\\"*\\n\\n In federated learning, GDP noise is applied to the aggregated gradient, while LDP noise is applied more frequently to each client to achieve individual data privacy. 
Due to this, LDP often requires more noise to provide the same privacy protection as GDP, resulting in more severe degradation to the model accuracy (Arachchige et al., 2019). This statement is updated in **Section 3.3**.\\n\\nShih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. *arXiv preprint arXiv:2402.09353*, 2024.\\n\\nDan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John Patrick Cunningham. LoRA learns less and forgets less. *Transactions on Machine Learning Research*, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=aloEru2qCG. Featured Certification.\\n\\nHamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A Smith, Iz Beltagy, et al. Camels in a changing climate: Enhancing lm adaptation with tulu 2. *arXiv preprint arXiv:2311.10702*, 2023.\\n\\nTerry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, and Niklas Muennighoff. Astraios: Parameter-efficient instruction tuning code large language models. *arXiv preprint arXiv:2401.00788*, 2024.\\n\\nJakub Konecny. Federated learning: Strategies for improving communication efficiency. *arXiv preprint arXiv:1610.05492*, 2016.\\n\\nMartin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security*, CCS \\u201916, pp. 308\\u2013318. Association for Computing Machinery, 2016.\\n\\nYixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, and Yang Zhang. Quantifying Privacy Risks of Prompts in Visual Prompt Learning. In *USENIX Security Symposium (USENIX Security)*. 
USENIX, 2024.\\n\\nPathum Chamikara Mahawaga Arachchige, Peter Bertok, Ibrahim Khalil, Dongxi Liu, Seyit Camtepe, and Mohammed Atiquzzaman. Local differential privacy for deep learning. *IEEE Internet of Things Journal*, 7(7):5827\\u20135842, 2019.\"}", "{\"comment\": \"We thank the reviewers for your time and effort in reviewing our paper. We appreciate your thoughtful feedback and suggestions, and we carefully address each comment and question in our response. All updates and changes are incorporated and highlighted in blue in our revised paper.\"}", "{\"comment\": [\"We thank the reviewer for acknowledging our effort in improving our work. During this rebuttal phase, we have:\", \"Differentiated our work from others and highlighted the significance and innovation of our method.\", \"Provided extensive ablation study and detailed discussion on the benefit of the residual term.\", \"Added more baselines that are directly applicable to our proposed framework.\", \"We would like to ask if there are any outstanding issues with our paper that are limiting you from considering our paper as acceptable. We appreciate any further feedback you can provide to help us improve our submission, and thank you for your time.\"]}", "{\"comment\": \"I thank the authors for the additional experiments and I have no further questions. I encourage their inclusion in the main text. I am willing to raise the score again.\"}", "{\"summary\": \"This article focuses on the issues of balancing personalization, generalization, and privacy in Federated Prompt Learning, and proposes a Differentially Private Federated Prompt Learning method to solve them, and finally proves the effectiveness of the proposed method in three datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It is very important and meaningful to consider privacy in Federated Prompt Learning.\\n2. The proposed method is simple and easy to deploy.\\n3. 
Existing methods are discussed and compared in detail.\\n4. The privacy guarantee of the proposed method is given.\", \"weaknesses\": \"1. Although I appreciate the simplicity of the proposed method, compared with existing methods, it seems that only the residuals and differential privacy are added, which are commonly used strategies, making the novelty insufficient.\\n2. Inadequate experiments. First, the baselines are limited, and it is unclear why a range of baseline methods from related work was not used. Secondly, it is unclear about the privacy-preserving performance of the proposed method, such as its performance in the face of membership inference attacks. Furthermore, no significance tests were performed. Finally, the proposed method has a large number of components, such as global prompt, local prompt, GDP, LDP, etc., all of which require ablation experiments.\\n3. There are a large number of assertions that have not been confirmed and are all the author's personal conjectures. For example, \\\"simply applying LoRA can result in loss of expressiveness during training, ...\\\", \\\" this approach does not protect the privacy of the prompt data..., Therefore, privacy noise must be incorporated into the gradient updates during each training step for privacy guarantee\\\", \\\"the impact of the GDP noise on the model utility is much smaller compared to LDP\\\".\\n4. Lack of complexity discussion.\\n5. Minor: What is $\\\\epsilon$ in Table 4?\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel approach, Differentially Private Federated Prompt Learning (DP-FPL), which aims to balance privacy, personalization, and generalization in multimodal large language models (LLMs) within federated learning frameworks. 
DP-FPL utilizes a low-rank adaptation (LoRA) technique with differential privacy (DP) to enable prompt tuning at both global and local levels, allowing clients to personalize prompts without directly sharing sensitive data. This method integrates both local and global differential privacy mechanisms, selectively adding noise to low-rank components to preserve model performance while ensuring data privacy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper combines local and global differential privacy with federated prompt learning, creating a robust privacy-preserving framework suitable for multimodal LLMs.\", \"By using low-rank adaptation with a residual component, DP-FPL achieves a balance between generalization for broader applicability and personalization for client-specific data.\", \"The experimental design demonstrates DP-FPL\\u2019s effectiveness across multiple datasets and privacy noise levels, providing evidence of the method\\u2019s performance under varying conditions.\"], \"weaknesses\": [\"The evaluation lacks statistical tests to confirm the significance of performance differences, which limits the interpretation of the reported improvements.\", \"The exploration of privacy-performance trade-offs is insufficient under practical, real-world conditions. While the paper discusses differential privacy noise\\u2019s impact, it does not fully assess performance under stricter privacy constraints.\", \"The simulation of data heterogeneity is limited; the paper relies on randomly splitting class labels, which does not capture the complexity of real-world non-IID data distributions effectively.\"], \"questions\": [\"The model shows a performance trade-off between privacy, personalization, and generalization, yet this is not adequately explored in real-world applications. While the authors apply privacy noise at various levels, the impact on performance for real-world, heterogeneous data remains unclear. 
An experiment demonstrating this trade-off would strengthen the paper.\", \"The paper experiments with moderate privacy levels (\\u03f5=0.1,0.2,0.4), but it does not examine stricter levels (\\u03f5<0.1) that may be necessary for sensitive applications like healthcare or finance. Including results for \\u03f5=0.05 and \\u03f5=0.01 would provide a fuller understanding of the model\\u2019s robustness under high privacy demands.\", \"The dataset heterogeneity is simulated by randomly assigning class labels to clients, which does not accurately capture real-world, non-IID data distributions. Employing a Dirichlet distribution to split the dataset would more effectively simulate realistic data diversity, enhancing the model's evaluation under practical federated learning conditions.\", \"Although the paper notes that DP-FPL degrades gradually as DP noise increases, a clearer breakdown of this effect on both personalization and generalization\\u2014specifically for local and neighbor classes\\u2014across different datasets would add clarity to the trade-offs involved.\", \"Privacy noise appears to improve generalization by reducing overfitting; however, Table 2 shows mixed effects across datasets and noise levels. For instance, generalization to neighbor classes improves with increased privacy noise on Caltech101 but is inconsistent on Oxford Flowers. Analyzing this inconsistency could provide insights into the regularization benefits of DP noise.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Post-rebuttal Comments\", \"comment\": \"Thanks for the authors' feedback. The new results and discussions address most of my concerns. I raise my rating accordingly.\"}", "{\"comment\": \"Thanks for the authors' responses. 
The authors have addressed my concerns and I will maintain my rating of acceptance for the paper.\"}", "{\"comment\": \"We thank the reviewer for recognizing our effort and raising their score. We are glad that we have addressed your concerns with the new experiments on more complex settings, including larger datasets, a different model, more clients and an extended ablation study. The new empirical results have demonstrated the applicability of our method in more diverse settings.\\n\\nWe also appreciate your recognition of our extended discussion on the benefit of the residual term and the privacy bound of the global gradient. We thank the reviewer again for your constructive feedback and suggestions that have helped enhance the quality of our paper.\"}", "{\"comment\": \"*4. Are there more results on more clients, large datasets, and other models besides ViT-B16?*\\n-----\\nAs mentioned in our response to point 1, we conducted additional experiments on two large datasets, Food101 and CIFAR-100, using ViT-B16 as the backbone model for Food101 and ResNet50 for CIFAR-100. We have also increased the number of clients to $25$ and $50$ for CIFAR-100. All new experiments show similar and consistent results with the existing empirical setup. The new results are reported in **Section 4.2**.\\n\\n*5. Any theoretical justification of why the residual term preserves the expressiveness of the local prompt?*\\n-----\\nWe refer the reviewer to our response to point 2 above. We have added the discussion of the residual term in **Section 3.3**. Our hypothesis is supported by the ablation experiment results in **Section 4.3**. We leave further theoretical analysis of the residual term as a potential future work direction.\"}", "{\"comment\": \"Thank you for your detailed feedback and suggestions. We respond to your comments as follows.\\n\\n*1. 
Since the introduction of the DP part is quite standard for privacy preservation, the novelty mostly lies in the introduction of the residual term compared to FedPGP.*\\n-----\\nWe thank the reviewer for recognizing the significance of our work. While our approach is similar to FedPGP, our low-rank factorization process is inherently different. In FedPGP, the learnable local prompt is **factorized at the beginning** and is kept low-rank during the entire training process, which reduces the local learning capability due to the loss of expressiveness, as shown in our experimental results. We tackle this issue by **factorizing every training round**, and also incorporating the residual term to retain the error of the factorization process. Furthermore, we introduce an unconventional way of adding DP noise using both global and local differential privacy to protect the prompt data, unlike a vanilla method that directly adds noise to the whole prompt before publishing it. Our privacy mechanism mitigates the effect of DP noise on model performance while still maintaining the same level of privacy guarantee. In the revised paper, we have emphasized the key differences to highlight our contribution in **Sections 3.2 and 3.3**.\\n\\n*2. Although the standard theoretical analysis on DP is provided, it does not address the impact of the introduction of the residual term. The paper would be enhanced significantly if it could provide some theoretical insights on the impact of the residual on the utility trade-off, which would support its experimental results.*\\n-----\\nWe thank the reviewer for the thoughtful suggestion, which we address as follows. Our approach uses low-rank decomposition and DP, and both of these building blocks introduce error to the training process. This error acts as a regularization term that prevents clients from overfitting to local data, reducing personalization and improving generalization. 
However, under stricter conditions (lower rank and higher DP noise), the accumulated error may become too large and destroy the personalization capability. In this case, the added residual term compensates for the regularization-like error and helps improve local learning, balancing personalization and generalization. We have added the discussion of the residual term at the end of **Section 3.3**. In addition, we further experimented with the impact of the residual via the ablation study in **Section 4.3**, and the results support our hypothesis. We leave further theoretical analysis of the residual term as a potential future work direction.\\n\\n*3. The experimental baselines are a bit limited. Only two methods are compared, while many federated prompt learning methods have been proposed in the past, such as PromptFL, pFedPrompt, FedOTP, etc.*\\n-----\\nWe appreciate the reviewer\\u2019s feedback regarding the selection of baseline methods. To address the concern of limited baselines, we have expanded our experimental evaluation to include additional baseline methods, specifically **PromptFL** (Guo et al., 2023b) and **FedOTP** (Li et al., 2024). The results of these extended experiments are detailed in Tables 2 to 4, demonstrating the robustness and comparative performance of our proposed method against a broader range of established techniques. Note: To prevent redundancy and streamline our comparisons, we have removed the baseline FULL from the updated paper, since FedOTP, which is also trained with full-rank global and local prompts, is a more recent baseline.\\n\\nRegarding other related works in federated prompt learning, many require modifications to the backbone model, which is not relevant to our approach as we want to protect the personalized prompt, not the personalized model. Our method does not change the backbone model, as outlined in Table 1. 
As a result, we only consider FedPGP, PromptFL and FedOTP, which are directly applicable baselines to our proposed framework.\\n\\n*4. If I understand correctly, FedPGP in Table 2 in the experiment section applies the original FedPGP with DP, and is therefore the same as \\\"without residual\\\" in Table 3. Correct? In other words, is the result difference between FedPGP and DP-FPL in Table 2 due to the residual term only?*\\n-----\\nWe want to clarify that FedPGP and our method without residual in Table 3 (Table 7 in the revised paper) are different. As explained in our response to point 1, in FedPGP, the learnable local prompt is kept low-rank during the entire training process, while our method without residual performs the factorization process every training round and excludes the residual term. We have updated the description to make this clear in **Section 4.3**.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"*2. Inadequate experiments. First, the baselines are limited, and it is unclear why a range of baseline methods from related work was not used.*\\n-----\\nWe appreciate the reviewer\\u2019s feedback regarding the selection of baseline methods. Our primary baseline, FedPGP, was chosen because it closely aligns with our methodology through its use of low-rank decomposition, and it is considered state of the art for addressing personalization and generalization, making it a highly relevant point of comparison.\\n\\nTo address the concern of limited baselines, we have expanded our experimental evaluation to include additional baseline methods, specifically **PromptFL** (Guo et al., 2023b) and **FedOTP** (Li et al., 2024). The results of these extended experiments are detailed in Tables 2 to 4, demonstrating the robustness and comparative performance of our proposed method against a broader range of established techniques. 
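To make the per-round factorization with a residual concrete, here is a toy NumPy sketch (illustrative names and shapes of our own, not the actual training code) of one power-iteration step applied to a prompt matrix:

```python
import numpy as np

def factorize_with_residual(P, k, rng):
    """Rank-k factorization of P via one power (subspace) iteration, O(k*m*n),
    plus a residual that keeps the error the low-rank part would discard."""
    U, _ = np.linalg.qr(P @ rng.standard_normal((P.shape[1], k)))  # m x k basis
    V = U.T @ P                                                    # k x n coefficients
    R = P - U @ V                                                  # factorization error
    return U, V, R

rng = np.random.default_rng(0)
P = rng.standard_normal((16, 32))         # toy local prompt parameters
U, V, R = factorize_with_residual(P, k=4, rng=rng)
# Re-running this every round keeps U @ V + R equal to the full-rank prompt,
# so expressiveness is preserved even though U and V are low-rank.
```

The point of the sketch is the contrast with a fixed low-rank parameterization: because the factorization is redone from the current full-rank prompt each round and the error is kept in R, no information is permanently discarded.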
Note: To prevent redundancy and streamline our comparisons, we have removed the baseline FULL from the updated paper, since FedOTP, which is also trained with full-rank global and local prompts, is a more recent baseline.\\n\\nRegarding other related works in federated prompt learning, many require modifications to the backbone model, which is not relevant to our approach as we want to protect the personalized prompt, not the personalized model. Our method does not change the backbone model, as outlined in Table 1. As a result, we only consider FedPGP, PromptFL and FedOTP, which are directly applicable baselines to our proposed framework.\\n\\nTao Guo, Song Guo, Junxiao Wang, Xueyang Tang, and Wenchao Xu. Promptfl: Let federated participants cooperatively learn prompts instead of models-federated learning in age of foundation model. *IEEE Transactions on Mobile Computing*, 2023b.\\n\\nHongxia Li, Wei Huang, Jingya Wang, and Ye Shi. Global and local prompts cooperation via optimal transport for federated learning. *arXiv preprint arXiv:2403.00041*, 2024.\\n\\n*3. Secondly, it is unclear about the privacy-preserving performance of the proposed method, such as its performance in the face of membership inference attacks.*\\n-----\\nThe reviewer raises a valid question. A previous study has shown that one can derive a bound on the success rate of a membership inference attack (MIA) on an $(\\\\epsilon, \\\\delta)$-differentially private training algorithm (Thudi et al., 2022). The MIA accuracy is generally lower than a random guess ($50\\\\%$) for appropriately chosen privacy parameters $\\\\epsilon, \\\\delta$. We added this statement in our privacy analysis in **Section 3.5**.\\n\\nAnvith Thudi, Ilia Shumailov, Franziska Boenisch, and Nicolas Papernot. Bounding membership inference. *arXiv preprint arXiv:2202.12232*, 2022.\\n\\n*4. 
Furthermore, no significance tests were performed.*\\n-----\\nIn the revised paper, we have significantly expanded our experimental evaluation to enhance the robustness and comprehensiveness of our findings. Specifically, we conducted a wide range of experiments across multiple small-scale and large-scale datasets under various complex settings, including different data distributions, models, and numbers of clients (**Section 4.2**). To deepen the understanding of our method, we performed more extensive ablation studies that investigate the influence of each key parameter (**Section 4.3**). Additionally, we have now included standard deviations alongside the mean test accuracies in our result tables to provide a clearer picture of performance variability. A thorough description of our experimental setup and ablation studies can be found in the appendix.\\n\\n*5. Finally, the proposed method has a large number of components, such as global prompt, local prompt, GDP, LDP, etc., all of which require ablation experiments.*\\n-----\\nWe want to clarify that the components global prompt and local prompt are the learnable parameters of the training algorithm, and the notions of GDP and LDP are the privacy guarantees we provide given the adversary model described in **Section 3.3**. These components are part of the training and privacy-preserving objectives, and hence cannot be changed. We studied the key parameters that directly affect the trade-off between personalization, generalization and privacy: the noise level $\\\\epsilon$, the rank value and the residual term. We identified which components are key parameters and have discussed the ablation experiment for each component in more depth in **Section 4.3**.\"}", "{\"comment\": \"I have read the revised paper and responses, which have partially addressed my concerns. Based on my overall assessment of this paper, I prefer to maintain my current rating.\"}", "{\"comment\": \"Thank you for taking the time to review our paper. 
Below we respond to the weaknesses and questions raised.\\n\\n*1. While the problems the paper tries to address are certainly important, the paper does not provide significant scientific insight nor rigorous analysis that can help solve them in broad situations. Instead, a specific set of existing techniques are shown to be effective in a narrow setting (i.e., small-scale datasets and one model architecture).*\\n-----\\nThank you for your valuable feedback. To address your concerns regarding the breadth and rigor of our analysis, we have performed several additional experiments and in-depth studies:\\n\\n1. **Expanded Large-Scale Dataset Evaluation**:\\n - **Food101**: We implemented our method using the ViT-B16 architecture as the backbone model.\\n - **CIFAR-100**: We employed the ResNet50 architecture to evaluate our approach.\\n \\n These large-scale datasets demonstrate that our method maintains its effectiveness beyond small-scale settings, showcasing its applicability across diverse and more complex data scenarios.\\n\\n2. **Comprehensive Ablation Studies**:\\n We conducted detailed ablation studies focusing on key parameters that directly affect the tradeoff between personalization, generalization and privacy, including:\\n - **Noise Level** for privacy protection\\n - **Rank** for low-rank factorization\\n - **Residual Term**\\n \\n These studies provide deeper insights into the influence of each parameter on our method's performance, highlighting the robustness and adaptability of our approach.\\n\\n3. **Consistent and Robust Results**:\\n The new experimental outcomes are consistent with our initial findings, further validating the effectiveness and reliability of our method across different architectures and larger datasets. 
The ablation experiment also shows the importance of the inclusion of the residual term in balancing personalization, generalization and privacy.\\n\\nAll the new empirical results and detailed analyses are presented in **Sections 4.2 and 4.3** of the revised manuscript. These additions not only enhance the scientific rigor of our work but also demonstrate its applicability in broader and more varied settings.\\n\\nWe believe that these comprehensive evaluations and analyses significantly strengthen our paper, providing the necessary scientific insights and rigorous validation to address the concerns raised.\\n\\n*2. Though the introduction of the residual term in the local prompt decomposition is empirically demonstrated to be effective in the experimental settings, the paper does not give theoretical justification of why such design is beneficial and how applicable it is when applying to more complex situations (e.g., larger data heterogeneity).*\\n-----\\nOur approach uses low-rank decomposition to balance personalization and generalization, and differential privacy (DP) to protect the sensitive prompt. Both of these building blocks introduce error to the training process. We hypothesize that this error acts as a regularization term that prevents clients from overfitting to local data, reducing personalization and improving generalization. However, under strictly private conditions (lower rank and higher DP noise), the accumulated error may become too large and potentially destroy the personalization capability. In this case, the added residual term compensates for the regularization-like error and helps improve local learning of the local prompt, balancing personalization and generalization. We have added the analysis at the end of **Section 3.3**, as well as additional empirical results for more complex settings as described previously in **Sections 4.2 and 4.3**.\\n\\n*3. 
The privacy analysis also lacks rigorous proof of why the global gradient aggregation satisfies the DP bound.*\\n-----\\nWe assume that the server and clients are honest, and the adversaries are public users with access to a fully trained customized prompt obtained from an FPL client, as described in the prompt-as-a-service paradigm. Therefore, we care about the privacy guarantee of the published customized prompt. Nevertheless, because we apply global differential privacy (GDP) to the aggregated gradient with the privacy budget analyzed in Theorem 3.3, the distribution of the aggregated gradient to all clients satisfies $(\\\\epsilon, \\\\delta)$-GDP with the same privacy budget by the post-processing property of DP. We added this statement to the privacy analysis in **Section 3.5**.\"}", "{\"comment\": \"We thank the reviewer for the constructive suggestions. We address each point raised in the review below.\\n\\n*1. The evaluation lacks statistical tests to confirm the significance of performance differences, which limits the interpretation of the reported improvements.*\\n-----\\nWe have added standard errors alongside the mean test accuracies in our result tables to illustrate the variability and reliability of our experimental outcomes. To ensure the robustness of our findings, we conducted multiple runs of each experiment and calculated the average performance metrics. \\nDue to time constraints, we reported the average run only for Caltech101 in the revised paper. The standard deviations are minimal, indicating consistent performance across multiple runs. We expect the average results for other datasets to be consistent as well. We will update the average run results for all other datasets in the final paper.\\n\\n*2. The exploration of privacy-performance trade-offs is insufficient under practical, real-world conditions. 
While the paper discusses differential privacy noise\\u2019s impact, it does not fully assess performance under stricter privacy constraints.*\\n-----\\nThank you for this valuable suggestion. In response, we have conducted additional experiments to evaluate the performance of our method under stricter privacy constraints by using higher noise levels $\\\\epsilon = \\\\{0.01, 0.05\\\\}$. The key findings from these experiments are as follows: at a stringent privacy level of $\\\\epsilon = 0.01$, certain datasets, such as Oxford Flowers, exhibited a noticeable decrease in test accuracy. Despite the increased noise, our method consistently outperforms existing baselines across most datasets. This demonstrates the robustness and effectiveness of our approach in maintaining superior performance even under stricter privacy constraints. The updated results are detailed in Tables 2 to 4 of the revised paper. \\n\\n*3. The simulation of data heterogeneity is limited; the paper relies on randomly splitting class labels, which does not capture the complexity of real-world non-IID data distributions effectively.*\\n-----\\nWe appreciate the reviewer\\u2019s valuable feedback regarding data heterogeneity. To more accurately represent real-world non-IID data distributions, we conducted additional experiments on the CIFAR-100 dataset using a Dirichlet distribution with parameter $\\\\alpha = 0.3$. This approach better captures the variability and complexity inherent in practical scenarios. 
The new results, detailed in Table 4 of the revised manuscript, are consistent with our original findings and further validate the effectiveness and robustness of our proposed method under more realistic heterogeneous conditions.\"}", "{\"summary\": \"This paper proposes a Differentially Private Federated Prompt Learning (DP-FPL) system for multimodal large language models to address the critical need of balancing the competing goals of personalization, generalization, and privacy-preserving that are involved in serving those models. By decoupling the globally learnable prompt from the personal learnable prompt, decomposing the local prompt with LoRA with an additional residual term, and employing differential privacy techniques in both local prompt learning and global prompt aggregation, DP-FPL achieves accuracy gains while meeting a specific privacy budget in a distributed and heterogeneous data setting.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The motivation of the paper is well stated, and writing is easily understood. The problem the paper tries to address is certainly important and practical in multimodal LLM deployment. The experiment setting is clear, and results support the claim the authors make.\", \"weaknesses\": \"While the problems the paper tries to address are certainly important, the paper does not provide significant scientific insight nor rigorous analysis that can help solve them in broad situations. Instead, a specific set of existing techniques are shown to be effective in a narrow setting (i.e., small-scale datasets and one model architecture). Though the introduction of the residual term in the local prompt decomposition is empirically demonstrated to be effective in the experimental settings, the paper does not give theoretical justification of why such design is beneficial and how applicable it is when applying to more complex situations (e.g., larger data heterogeneity). 
The privacy analysis also lacks rigorous proof of why the global gradient aggregation satisfies the DP bound.\", \"questions\": \"Are there more results on more clients, large datasets, and other models beside ViT-B16? Any theoretical justification of why the residual term preserves the expressiveness of the local prompt?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EqcLAU6gyU
Agent-Oriented Planning in Multi-Agent Systems
[ "Ao Li", "Yuexiang Xie", "Songze Li", "Fugee Tsung", "Bolin Ding", "Yaliang Li" ]
Through the collaboration of multiple LLM-empowered agents possessing diverse expertise and tools, multi-agent systems achieve impressive progress in solving real-world problems. Given the user queries, the meta-agents, serving as the brain within multi-agent systems, are required to decompose the queries into multiple sub-tasks that can be allocated to suitable agents capable of solving them, so-called agent-oriented planning. In this study, we identify three critical design principles of agent-oriented planning, including solvability, completeness, and non-redundancy, to ensure that each sub-task can be effectively resolved, resulting in satisfactory responses to user queries. These principles further inspire us to propose AOP, a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process followed by an effective and efficient evaluation via a reward model. According to the evaluation results, the meta-agent is also responsible for promptly making necessary adjustments to sub-tasks and scheduling. Besides, we integrate a feedback loop into AOP to further enhance the effectiveness and robustness of such a problem-solving process. Extensive experiments demonstrate the advancement of AOP in solving real-world problems compared to both single-agent systems and existing planning strategies for multi-agent systems. The source code is available at https://github.com/lalaliat/Agent-Oriented-Planning
[ "Multi-Agent System; Planning" ]
Accept (Poster)
https://openreview.net/pdf?id=EqcLAU6gyU
https://openreview.net/forum?id=EqcLAU6gyU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z7qWZF24AK", "yDty7dwN1k", "xo5SRiKCsb", "v1NPr3WEy3", "tVtZdvtlb4", "sBM5zmyvuA", "rburE0jgpJ", "qVtGnihsL7", "mU7Hf9ykWZ", "lcclNd1LiP", "lVrMW7S4uj", "lSgjQEYVOE", "kmlBvwUeJa", "kLrZnkCFbb", "jtKlfUcN6b", "gJzPkC6UA2", "fyxcbcWRai", "ehei0VC5M5", "bWPgS0g5C4", "bNdn3uCXsg", "bKQQXtgGJZ", "bIwvYpaphE", "bEsx1sNF8s", "awvQ4vwhOw", "ZbyhvtNHZG", "ZMzaPQ8kjZ", "VRXz97bfy4", "VQdvOSykZn", "P1J7UGjbdb", "OV1edAK8JE", "Ldw8OyD109", "KjakhOV4AW", "JSxzy9Hj4s", "IDzSS6838R", "HVBmIGZcoi", "ELWKHgLYe5", "DOVUrYWJ6y", "6wxHOz5xc7", "4vfTpPKCdf", "4osV5X6Chj", "0VxJtWFr4f" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732641080439, 1732860129573, 1732531130173, 1732531708282, 1733106528042, 1732531292064, 1732705582030, 1732615145766, 1733190472736, 1732530780870, 1730651364588, 1732531499326, 1732860190836, 1733148666459, 1732759099337, 1732569551905, 1732531363089, 1732531522895, 1732530849475, 1733106678446, 1732860236629, 1732531060999, 1732530999242, 1733190423252, 1732531591245, 1730402997010, 1732616579950, 1733151807629, 1732531229173, 1733106623368, 1732705640853, 1730709271950, 1729824350498, 1732616795711, 1732846272373, 1737524242460, 1732531666019, 1730357114409, 
1734626642076, 1732679603051, 1732744795140 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_vz1o" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_LMUm" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_LMUm" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_4UEG" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_AMcZ" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_vz1o" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_4UEG" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_AMcZ" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_8PGi" ], [ "ICLR.cc/2025/Conference/Submission13187/Area_Chair_M8h9" ], [ "ICLR.cc/2025/Conference/Submission13187/Authors" ], [ "ICLR.cc/2025/Conference/Submission13187/Reviewer_AMcZ" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the Rebuttal\", \"comment\": \"Thank you for providing the new results. These results resolve most of my concerns and I would consider increasing the score.\", \"i_have_a_few_more_questions\": \"1. Regarding the feedback loop (Sec. 4.5), my understanding is that the pipeline uses a reward model to obtain a numerical value indicating the quality of the plan. Could you please clarify how to refine the plan (e.g., changing the prompt?) based on a single numerical reward value? Please also let me know if my understanding is incorrect.\\n\\n2. Regarding the reward model, if there are some outliers (bad plans) with high rewards, is there a post-processing technique to detect and resolve such outliers?\"}", "{\"title\": \"Thank you and look forward to further discussion\", \"comment\": \"Dear Reviewer 4UEG:\\n\\n\\nThank you for your detailed comments and helpful suggestions! We are wondering if our responses have resolved your concerns. Please let us know if our response and clarification meet your expectations. We are eager to engage in further discussions and continue improving our work.\\n\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"> Q2: Similarly, why not represent the agent capabilities in terms of planning operators or some other well-defined formalism rather than (what I presume) is natural language descriptions?\\n\\n**Responses**: Thank you for your comments on the descriptions of agent capabilities. 
\\nFor the LLM-powered agents in this study, unlike domains such as robot planning where PDDL excels, it can be challenging to develop a system that fully explains all agents' capabilities using well-defined formalisms. Therefore, to ensure a fair comparison, we adopt description approaches similar to those used in existing studies on LLM-powered agent systems.\\n\\nWe believe that using planning operators or some other well-defined formalism to represent the agent's capabilities is a promising future direction, but it might be outside the scope of this paper. Thank you again!\\n\\n> Q3: How do you formally verify that the generated plans are correct and solve the problem?\\n\\n**Responses**: Thank you for your comments. Regarding the verification of whether the generated plans are correct and solve the problem, we include two types of evaluations:\\n- Firstly, we execute the generated plans based on the multi-agent system, which includes problem decomposition and task allocation to the corresponding agents, with refinement during execution. The system then produces a response to the user query. We compare the response to the user query, provided by the system, with the ground truth in the dataset to determine whether the problem has been effectively solved or not. Please refer to Table 1 in the paper for such an end-to-end evaluation.\\n- Secondly, we can also directly assess the generated subtasks in terms of solvability, completeness, and non-redundancy. Specifically, we compare the decomposed subtasks provided by our proposed framework to those provided by GPT-4o. These subtasks are assessed by an LLM-based evaluator, providing binary scores for solvability, completeness, and non-redundancy, with scores of 1 for those that meet these principles and 0 for those that do not. 
The experimental results (averaged scores), as summarized in the following table, show that the proposed framework achieves significant improvements.\\n| | Solvability | Completeness | Non-redundancy | \\n| ----- | ---- | ---- | ---- |\\n| GPT-4o | 0.763 | 0.822 | 0.986 |\\n| Ours | 0.938 | 0.969 | 0.993 |\\n\\nWe have added the above results in Appendix E.5 of the revised paper. Thank you again.\", \"references\": \"[1] Cooperative Multi-Agent Planning: A Survey \\n[2] A survey of research in distributed continual planning \\n[3] From one to many: Planning for loosely coupled multi-agent systems \\n[4] STRIPS: A new approach to the application of theorem proving to problem solving \\n[5] A multi-agent extension of PDDL3.1 \\n[6] PDDL\\u2014The planning domain definition language \\n[7] Deep reinforcement learning with a natural language action space \\n[8] Keep calm and explore: Language models for action generation in text-based games \\n[9] A survey on large language model based autonomous agents \\n[10] Llm+p: Empowering large language models with optimal planning proficiency \\n[11] Dynamic planning with a llm \\n[12] Leveraging pre-trained large language models to construct and utilize world models for model-based task planning \\n[13] Coupling large language models with logic programming for robust and general reasoning from text \\n[14] Keep calm and explore: Language models for action generation in text-based games \\n[15] Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks \\n[16] Translating Natural Language to Planning Goals with Large-Language Models \\n\\n\\n------\\n\\nThank you again for your valuable feedback! We have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. \\nWe believe that this submission has been further improved based on your suggestions. 
We hope these responses can address all your concerns and convince you to lean more towards acceptance of our paper.\"}", "{\"comment\": \"> W5: The experimental evaluation in the paper appears to be limited in scope, as it primarily focuses on a single numerical reasoning dataset.\\n\\n**Responses**: Thank you very much for your suggestions on adding experiments on more datasets! We conduct additional experiments on two datasets, including DROP[1] and IIRC[2], which are suggested and processed by Husky[3]. The experimental results are shown in the table below, which indicate that the proposed framework significantly outperforms the baselines, achieving at least a 5.0\\\\% improvement on both DROP and IIRC, thereby confirming the effectiveness and advancements of the proposed framework.\\n| | DROP (\\\\%)| IIRC (\\\\%)|\\n| ----- | ---- | -------- |\\n| GPT-4o | 23.0 | 33.0 |\\n| CoT | 26.0 | 35.0 |\\n| Zero-Shot CoT | 24.5 | 33.0 |\\n| Meta-Agent | 25.5 | 32.5 |\\n| Meta-Agent: Traversal | 27.5 | 34.0 |\\n| REACT | 29.0 | 36.0 |\\n| HUSKY | 28.0 | 36.5 |\\n| Ours | **34.0** | **41.5** |\\n\\nWe have added the above experimental results to Appendix E.1 in the revised paper that has been uploaded. Thank you once again for your helpful suggestions for further improving our submission.\", \"references\": \"[1] DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. \\n[2] IIRC: A Dataset of Incomplete Information Reading Comprehension Questions. \\n[3] Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning. \\n\\n\\n> Q2: Could you clarify the unique aspects of your reward model and feedback loop in this framework? How do they differ from traditional surrogate models, and what specific contributions do they offer to agent-oriented planning in LLM-based systems?\\n\\n**Responses**: Thank you very much for your comments. 
Please refer to the above responses to W3.1 and W3.2 for the advances, technical details, and contributions of the reward model and feedback loop in the framework, respectively.\\n\\n\\n------\\n\\nThank you again for your valuable feedback! We have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. \\nWe believe that this submission has been further improved based on your suggestions. We hope these responses can address all your concerns and convince you to lean more towards acceptance of our paper.\"}", "{\"title\": \"Look forward to receiving your feedback on the author responses\", \"comment\": \"Dear Reviewer 4UEG,\\n\\nI hope this email finds you well.\\n\\nWe really appreciate your helpful suggestions regarding Figure 2, the ordering of the sections, experiments on more datasets, the generalization ability of the rewards model, and so on. We definitely believe that this submission has been further improved based on your suggestions! As the due of discussion phase is very close, we kindly ask if you could take a moment to review the responses and provide your feedback at your earliest convenience.\\n\\nThank you again for the time and effort you put into reviewing our paper!\"}", "{\"comment\": \"> Q3: In my understanding, the four agents in the experiment are GPT-4o with different input prompts, are there any additional fine-tuning to improve the expertise of each agent?\\n\\n**Responses**: Thank you for your comments. We replace the math agent with Qwen2-Math-7B[1] and the code agent with DeepSeek-Coder-V2[2], and conduct experiments to demonstrate that the proposed framework is compatible with and can be further enhanced by expert agents. Please refer to the responses to W1 for more details.\\n\\n> Q4: The paper talks about solvability, completeness, and non-redundancy at the beginning of the paper. 
Are there any quantitative results showing that the proposed framework addressed these challenges?\\n\\n**Responses**: Thank you very much for your helpful suggestions! We conduct a quantitative evaluation to demonstrate the effectiveness of the proposed framework in terms of solvability, completeness, and non-redundancy. Specifically, we compare the decomposed subtasks provided by our proposed framework to those provided by GPT-4o. These subtasks are assessed by an LLM-based evaluator, providing binary scores for solvability, completeness, and non-redundancy, with scores of 1 for those that meet these principles and 0 for those that do not. The experimental results (averaged scores), as summarized in the following table, show that the proposed framework achieves significant improvements.\\n| | Solvability | Completeness | Non-redundancy | \\n| ----- | ---- | ---- | ---- |\\n| GPT-4o | 0.763 | 0.822 | 0.986 |\\n| Ours | 0.938 | 0.969 | 0.993 |\\n\\nWe have added these comparisons in Appendix E.5 in the revised paper. Thank you again for further improving our submission.\", \"references\": \"[1] https://huggingface.co/Qwen/Qwen2-Math-7B-Instruct. \\n[2] DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence. \\n[3] Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning. \\n[4] https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct. \\n[5] Measuring Mathematical Problem Solving With the MATH Dataset. \\n[6] https://openai.com/index/learning-to-reason-with-llms\\n\\n\\n------\\n\\nThank you again for your valuable feedback! We have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. \\nWe believe that this submission has been further improved based on your suggestions. 
We hope these responses can address all your concerns and convince you to lean more towards acceptance of our paper.\"}", "{\"title\": \"Responses to your following questions (1/2)\", \"comment\": \"Thank you very much for your reply! We provide the following responses to your questions.\\n\\n> To what extent can each of these principles be fulfilled? How do these principles relate to specific steps in the fast decomposition and allocation process?\\n\\n**Responses**: Thank you for your comments! The fast decomposition and allocation process generates the initial plans that might not satisfy these principles, which would be continuously refined and improved according to the detector and scorer. We conduct a quantitative evaluation to compare the initial planning and its continuous refinement regarding solvability, completeness, and non-redundancy. The subtasks contained in the plans are assessed by an LLM-based evaluator, providing binary scores for solvability, completeness, and non-redundancy, with scores of 1 for those that meet these principles and 0 for those that do not. The experimental results (averaged scores), as summarized in the following table, demonstrate the significant improvements achieved by the refined process in the proposed framework.\\n| | Solvability | Completeness | Non-redundancy | \\n| ----- | ---- | ---- | ---- |\\n| initial plan | 0.763 | 0.822 | 0.986 |\\n| refined plan | 0.938 | 0.969 | 0.993 |\\n\\nWe have added the above experiments in Appendix E.5 in the revised paper. Thank you again! \\n\\n\\n> How much effort should be invested in designing the natural language description, and to what extent does it affect performance? ... Similarly, how are the representative works selected, and what impact do they have on performance?\\n\\n**Responses**: Thank you for your comments on natural language description and representative works. 
\\n- Regarding natural language description: \\n - In this study, to ensure fair comparisons, we employ the natural language descriptions adapted from Husky [1]. Please refer to Appendix B for detailed descriptions.\\n - To further investigate the effect of natural language descriptions, we provide two different versions: (i) LLM-generated: We instruct the LLM to generate the natural language descriptions based on how we describe the agents in Section 5.1; (ii) Expert-written: The natural language descriptions crafted by a human expert. The comparisons on Husky-QA are shown in the following table. From the table, we can observe that using simple, fully automated natural language descriptions may somewhat affect the overall performance, but still outperforms baselines. Furthermore, incorporating more expert insights into the generation of natural language descriptions can provide an additional performance boost. These experimental results align with the current intuitive understanding of prompt engineering.\\n| | Accuracy (\\\\%) | \\n| ----- | ---- |\\n| GPT-4o | 33.3 | \\n| REACT | 37.6 | \\n| HUSKY | 39.6 | \\n| Ours (Descriptions adapted from Husky) | 43.7 | \\n| Ours (LLM-generated descriptions) | 40.4 | \\n| Ours (Expert-written descriptions) | **44.2** |\\n\\n- Regarding representative works:\\n - For each agent, sub-tasks that receive high scores (according to the same threshold) would be selected as its representative works. \\n - To further investigate the effect of representative works, we compare two more strategies for selecting representative works, including *more representative works* (i.e., lowering the threshold), and *w/o representative works* at all. The experimental results are shown in the following table. 
From the table, we can see that the strategy we adopted for selecting representative works in our paper is effective, and the representative works mechanism achieves robust performance, consistently outperforming baseline methods.\\n| | Accuracy (\\\\%) | \\n| ----- | ---- |\\n| GPT-4o | 33.3 | \\n| REACT | 37.6 | \\n| HUSKY | 39.6 | \\n| Ours | **43.7** | \\n| more representative works | 40.7 |\\n| w/o representative works | 31.8 |\\n\\nWe have added the above experiments and discussions in Appendix E.9 and E.10 in the revised paper. Thank you again!\", \"references\": \"[1] Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning.\"}", "{\"title\": \"Thank the authors for their rebuttals\", \"comment\": \"Like reviewer AMcZ I would like to thank the authors for the rebuttals which I will consider in the discussion.\"}", "{\"title\": \"Look forward to receiving your feedback\", \"comment\": \"Dear Reviewer 8PGi,\\n\\nAs the discussion phase draws to a close in less than a day, we kindly ask if our responses have addressed your concerns. We look forward to receiving your feedback.\\n\\nThank you again for the time and effort you have invested in reviewing our paper!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"We sincerely appreciate your detailed comments and valuable suggestions! We provide the following responses to address your concerns and answer your questions point by point.\\n\\n\\n> W1: I believe it can be made much clearer, especially with regards to Figure 2. It is difficult to understand this diagram without enough direct context.\\n\\n**Responses**: Thank you for your valuable suggestions regarding Figure 2. We have updated the figure to make it clearer and easier to follow, and we have also added a detailed caption: \\n\\nOverall architecture of the proposed agent-oriented planning framework. 
The framework begins with the meta-agent performing a fast decomposition and allocation of the received user query, resulting in an initial plan. A detector is employed to improve the completeness and eliminate redundancy of the plan, while a reward model provides scores to guide the meta-agent in refining the plan further, which involves operations such as replan, plan-in-detail, etc. The refined plan is sent back to the multi-agent system for generating responses to the user query.\\n\\nThese modifications have been incorporated into the revised paper that has been uploaded. We appreciate your helpful suggestions for further improving our submission. Thank you once again!\\n\\n\\n> W2 \\\\& Q3: The ordering of the sections, particularly the related work, and how it reads in context at the end, does not make sense.\\n\\n**Responses**: Thank you for your suggestions regarding the ordering of the sections. In the revised paper, we have moved the Related Work section to follow the Introduction and precede the Preliminaries section. We believe this arrangement would make the submission more reader-friendly and clear.\\n\\n\\n> W3 \\\\& Q1: The results only have one dataset to my understanding. Can this be extended to more datasets?\\n\\n**Responses**: Thank you very much for your valuable suggestions! We conduct additional experiments on two datasets, including DROP[1] and IIRC[2], which are suggested and processed by Husky[3]. 
The experimental results are shown in the table below, which indicate that the proposed framework significantly outperforms the baselines, achieving at least a 5.0\\\\% improvement on both DROP and IIRC, thereby confirming the effectiveness and advancements of the proposed framework.\\n| | DROP (\\\\%)| IIRC (\\\\%)|\\n| ----- | ---- | -------- |\\n| GPT-4o | 23.0 | 33.0 |\\n| CoT | 26.0 | 35.0 |\\n| Zero-Shot CoT | 24.5 | 33.0 |\\n| Meta-Agent | 25.5 | 32.5 |\\n| Meta-Agent: Traversal | 27.5 | 34.0 |\\n| REACT | 29.0 | 36.0 |\\n| HUSKY | 28.0 | 36.5 |\\n| Ours | **34.0** | **41.5** |\\n\\nWe have added the above experimental results in Appendix E.1 in the revised paper that has been uploaded. Thank you once again for your helpful suggestions for further improving our submission.\", \"references\": \"[1] DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. \\n[2] IIRC: A Dataset of Incomplete Information Reading Comprehension Questions. \\n[3] Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning. \\n\\n\\n\\n> W4 \\\\& Q2: I have concerns that the rewards model does not generalize outside of this dataset? Or would that simply require more training on bigger datasets?\\n\\n**Responses**: Thanks a lot for your comments. To evaluate the generalization capability of the reward model, we train a reward model based on Husky-QA and utilize it in experiments on the DROP and IIRC datasets. 
As shown in the table below, the reward model trained on one dataset demonstrates good generalization to other datasets (though it experiences a slight performance drop compared to the specifically trained reward model), achieving superior performance compared to the two strongest baselines.\\n| | DROP (\\\\%)| IIRC (\\\\%)|\\n| ----- | ---- | -------- |\\n| REACT | 29.0 | 36.0 |\\n| HUSKY | 28.0 | 36.5 |\\n| Ours (dataset-specific reward model) | 34.0 | 41.5 |\\n| Ours (reward model trained on Husky-QA) | 32.0 | 39.0 |\\n\\nWe have added the above experimental results in Appendix E.2 to the revised paper. We hope these responses can well address your concerns about the generalization of the reward model.\"}", "{\"summary\": \"The proposed system uses an LLM to generate a decomposition of a problem into an array of sub-tasks that are then allocated to different agents. Then a reward model is used to improve the plan as it is being executed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem of multi-agent planning is highly important with high potential impact.\", \"How to decompose problems into sub-tasks based on the capabilities of the individual agents is often the key problem.\", \"Learning the capabilities of other agents is a hard and important challenge.\", \"The paper is easy to follow.\"], \"weaknesses\": [\"The paper does not relate to the rich and long history of multi-agent planning. A starting point could be [1].\", \"The design principles seem a bit backwards. Normally, the set of problems that can be solved by a set of agents is defined as the union of the problems that each agent can solve, then you have to check whether the particular problem is within that set. Normally, it is a hard computational problem to determine what problems an agent can solve. 
If the set of sub-tasks contains redundancy, it is either a problem with the planner or there is a reason, such as a need to deal with non-deterministic outcomes of actions. To have it as a design principle seems strange.\", \"In the completeness principle it is mentioned \\\"The array of sub-tasks [...] should include all necessary information\\\": what does this mean? The first principle is defined in terms of \\\"resolvable\\\" tasks. The second principle is defined in terms of \\\"necessary information\\\". What is the relation between \\\"necessary information\\\" and \\\"resolvable\\\"?\", \"It is well known that LLMs cannot generate plans in the sense used in the planning community, as there are no guarantees that the plan is correct nor that it actually solves the problem. My impression is thus that the proposed solution is more about learning a model of different agents' capabilities than about planning.\", \"[1] Cooperative Multi-Agent Planning: A Survey. Alejandro Torre\\u00f1o, Eva Onaindia, Anton\\u00edn Komenda, Michal \\u0160tolba. ACM Computing Surveys (CSUR), Volume 50, Issue 6.\"], \"questions\": [\"Why is it beneficial to use natural language over PDDL and other languages specifically designed for planning? There are papers on this already [2], so it would make sense to compare to such an approach.\", \"Similarly, why not represent the agent capabilities in terms of planning operators or some other well-defined formalism rather than (what I presume are) natural language descriptions?\", \"How do you formally verify that the generated plans are correct and solve the problem?\", \"[2] Translating Natural Language to Planning Goals with Large-Language Models by Yaqi Xie et al.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Are there limitations on the scale or number of agents?\\n\\n**Responses**: Thank you for your comments. \\n- The proposed framework has no inherent limitations on the number of agents. However, as the meta-agent is powered by LLMs, increasing the number of agents is subject to the context window length restrictions of LLMs (descriptions of a large number of agents could exceed the LLM's context length limit, such as 128K tokens), and the effectiveness of LLMs could be affected by their ability to handle long contexts.\\n- In practical applications, it is important to consider that as the number of agents increases, there may be redundant and functionally similar agents, which motivates us to apply some extension strategies to handle this. For example, agents can be grouped according to their abilities. When the meta-agent allocates sub-tasks, it can first select an agent group for each sub-task and then further choose the most suitable agent within the group. This strategy respects the LLM's context window limits and enhances the accuracy of agent selection.\\n- We also conduct an experiment according to the above strategy, which increases the number of math agents and code agents to 4 each. The performance on Husky-QA is 46.2\\\\%, which is higher than the reported results that involve only one math agent and one code agent, due to the addition and management of expert agents. These experimental results show that the above strategy can effectively handle the increasing number of agents. \\n\\nThe above discussions and experiments are added to Appendix F.2 in the revised paper. Thank you again for your helpful suggestions for further improving our submission. \\n\\n\\n> Q5: how are these capabilities/descriptions in D represented? Can you provide more details?\", \"q6\": \"Related to the comment above, can you discuss what representation is best suited for agent descriptions? 
Are these representations task dependent?\", \"q7\": \"Another follow-up on the above comment, can you discuss about the process/requirements if these representations need to be verified or even updated through the process? How can the approach be adapted to handle such cases?\\n\\n**Responses**: Thank you very much for your insightful comments and helpful suggestions on the descriptions of agents. We provide a comprehensive response to all these questions.\\n- *Regarding the descriptions of agents*: In this study, we employ a combination of predefined natural language descriptions and representative works as the representations of agents\\u2019 descriptions: \\n - A natural language description can be manually provided or automatically generated, detailing the general and task-independent abilities of agents. For example, a description for a code agent can be ``This agent is proficient at writing and executing Python code to solve the tasks''. While providing a comprehensive and detailed natural language description can be beneficial, it also requires effective prompt engineering. In this study, to ensure fair comparisons, we employ simple natural language descriptions for agents similar to those used in previous studies, which can be found in Appendix B.\\n - The representative works consist of tasks that the agent has effectively tackled in the past. These representative works complement the natural language descriptions and are often task-dependent, allowing for continuous updates during execution.\\n- *What representation is best*: We conducted experiments on Husky-QA to compare the effectiveness of different representation approaches. The results, shown in the following table, indicate that using natural language descriptions achieves significantly superior performance compared to using only representative works, which motivates the majority of existing studies to adopt natural language descriptions. 
Besides, incorporating task-specific representative works on top of natural language descriptions leads to a further 11.9\\\\% performance boost, demonstrating the effectiveness of our proposed combined representation approach.\\n| | Accuracy |\\n| ----- | ---- |\\n| Natural Language Descriptions | 31.8\\\\% |\\n| Representative Works | 16.8\\\\% |\\n| Both | 43.7\\\\% |\\n\\n- *When representations need to be verified or even updated*: Recent studies[1,2] have explored the update of the natural language descriptions of agents, which are orthogonal to this study and can be a promising future direction. For the representative works, their design inherently supports verification and updates.\\n\\nThe above discussions and experiments on the descriptions are added to Appendix E.7 in the revised paper. Thank you very much for your helpful suggestions!\", \"references\": \"[1] Chameleon: Plug-and-play compositional reasoning with large language models \\n[2] Self-rag: Learning to retrieve, generate, and critique through self-reflection\"}", "{\"title\": \"Thank you and look forward to further discussion\", \"comment\": \"Dear Reviewer LMUm:\\n\\nThank you for your detailed comments and helpful suggestions! We are wondering if our responses have resolved your concerns. Please let us know if our response and clarification meet your expectations. We are eager to engage in further discussions and continue improving our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you, the edits you have made do indeed make the paper far more reader friendly.\\n\\nI believe that the further experiments with the DROP and IIRC datasets do add to the idea that this model can generalize outside of the dataset it is specifically trained for, showing it does improve on other methods. 
This does ease my concerns with regards to the generalization.\\n\\nThank you for clarifying with regards to the scorers.\"}", "{\"title\": \"Thank you for your reply\", \"comment\": \"We sincerely appreciate your positive feedback and your decision to raise the score! Thank you again for the time and effort you put into reviewing our paper!\"}", "{\"title\": \"Thank the authors for their rebuttals.\", \"comment\": \"I sincerely thank the authors for providing rebuttals for my review. In line with the rebuttals, I have the following related concerns and suggestions that hopefully could be further addressed.\", \"response_to_q1\": \"I am interested in empirical studies that examine how initial planning and its continuous refinement can satisfy the principles of solvability, completeness, and non-redundancy. To what extent can each of these principles be fulfilled? How do these principles relate to specific steps in the fast decomposition and allocation process?\", \"response_to_w2\": \"How much effort should be invested in designing the natural language description, and to what extent does it affect performance? Could you provide additional experimental results to support this? Similarly, how are the representative works selected, and what impact do they have on performance? Additional empirical results would be helpful to substantiate your argument.\", \"response_to_w4_and_q3\": \"Thanks for the explanation. However, I am still uncertain about the actual effort required for parameter fine-tuning and its practical implications. More experiments may be necessary to address this concern.\\n\\nResponse to W3.1:\\n\\nSeveral existing studies have explored the possibility of predicting the performance of Large Language Models, such as:\\n\\nZhang, Q., Lyu, F., Liu, X., & Ma, C. (2024). Collaborative Performance Prediction for Large Language Models. arXiv preprint arXiv:2407.01300.\\nOwen, D. (2024). How predictable is language model benchmark performance? 
arXiv preprint arXiv:2401.04757.\\nYe, Q., Fu, H. Y., Ren, X., & Jia, R. (2023). How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench. arXiv preprint arXiv:2305.14947.\\n\\nHence, I think some previously studied methods (not just those listed above) may serve as alternatives to the reward model. Could additional ablation studies be conducted to demonstrate the advantages of the new reward model compared to existing ones?\"}", "{\"comment\": \"We sincerely appreciate your detailed comments and valuable suggestions! We provide the following responses to address your concerns and answer your questions point by point.\\n\\n\\n> Q1: Task dependencies are identified during decomposition, but dependencies may also emerge during execution. How does the framework adapt when unforeseen dependencies between sub-tasks arise?\\n\\n**Responses**: Thank you very much for the thoughtful comments on the unforeseen dependencies during execution.\\n- We agree that unforeseen dependencies can impact the execution of sub-tasks. In the proposed framework, any missing dependencies, whether they arise during decomposition or execution, are considered part of the completeness of subtasks and are identified by a detector. Specifically, before a sub-task is assigned to an agent for execution, the detector is required to determine whether there are any additional dependencies needed beyond those that were identified during decomposition (i.e., unforeseen dependencies). If such dependencies exist, the execution results of these dependent sub-tasks must also be provided as inputs.\\n- We also conduct an experiment to evaluate the effectiveness of the proposed improvements of the detector. The results demonstrate a 2.5\\\\% performance increase in accuracy (from 43.7\\\\% to 46.2\\\\%). More experimental settings (such as used prompts) are provided in Appendix E.6 of the revised paper. 
\\n\\nWe have added the above experimental results and discussions in Section 4.4 and Appendix E.6 to the revised paper that has been uploaded. Thank you once again for your helpful suggestions for further improving our submission. \\n\\n\\n> Q2: How does this approach handle unexpected changes in agent availability or evolving task requirements without needing a complete re-plan?\\n\\n**Responses**: Thank you very much for the insightful comments on the unexpected changes in agent availability. We categorize changes in agent availability into two situations: those occurring during non-execution periods and those during execution.\\n- *During non-execution periods*: The proposed framework allows for the addition or removal of agents. The meta-agent can first broadcast a simple sync signal to determine agent availability. Only available agents are provided to the meta-agent for task allocation. \\n- *During execution*: Changes in agent availability during execution indicate that an agent selected for a task may unexpectedly become unavailable, leading to the meta-agent not receiving a response to this sub-task. In such scenarios, the meta-agent is required to reassign the sub-task to another agent with similar capabilities (which relates to responses to Q3 regarding backup agents) or to further decompose the task (i.e., plan-in-detail).\\n\\nThe above discussions on fault tolerance concerning agent availability have been added to Appendix F.1 in the revised paper. Thank you again for your valuable suggestions!\\n\\n\\n> Q3: Non-redundancy is a key focus, yet redundancy can be valuable for fault tolerance. 
How does the framework balance efficiency with the potential need for redundancy, particularly in scenarios where backup solutions could be essential?\\n\\n**Responses**: Thank you for your comments on redundancy.\\n- On one hand, we agree that adding redundancy among agents is necessary for fault tolerance, especially considering unexpected changes in agent availability (as you mentioned in responses to Q2).\\n- On the other hand, we believe that redundancy between sub-tasks should be minimized, which implies that repeated execution of the same operations across sub-tasks should be avoided as much as possible. For example, if the melting point of a particular metal has already been queried in a sub-task, this knowledge should be directly utilized if needed, rather than being queried again in another sub-task. \\n\\nWe have included additional discussions on non-redundancy in Section 3 of the revised paper to make it clearer and more comprehensive. Thank you again for your suggestion!\"}
We have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. \\nWe believe that this submission has been further improved based on your suggestions. We hope these responses can address all your concerns and convince you to lean more towards acceptance of our paper.\"}", "{\"comment\": \"> W5: I believe this work is good and novel, however it could be improved upon with clearer writing.\\n\\n**Responses**: Thank you for your suggestions! We have carefully polished our paper, including re-arranging the content, correcting typos, and enhancing the descriptions to make them more logical and reader-friendly. All these improvements have been incorporated into the revised paper that has been uploaded. Thank you once again!\\n\\n\\n> Q4: Are there cases where the scorers are human experts? It was a bit ambiguous as to whether this was in fact the case.\\n\\n**Responses**: Thank you for your comments. In this study, we investigate both model-based scorers and human-expert-based scorers, and provide a comparison in Table 3 of the paper. We find that using human-expert-based scorers can lead to further improvements in the overall framework, which can be attributed to their annotations being better aligned with human understanding. However, we predominantly adopt a model-based scorer (using GPT-4o) in most of the experiments, as it is a more cost-effective and generalizable manner, showing that the proposed framework does not heavily rely on human experts. \\n\\nThank you again for your comments. We have included the above discussions in Section 5.3 in the revised paper accordingly.\\n\\n\\n------\\n\\nThank you again for your valuable feedback! We have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. 
\\nWe believe that this submission has been further improved based on your suggestions. We hope these responses can address all your concerns and convince you to lean more towards acceptance of our paper.\"}", "{\"title\": \"Look forward to receiving your feedback on the author responses\", \"comment\": \"Dear Reviewer 8PGi,\\n\\nI hope this email finds you well.\\n\\nWe really appreciate your helpful suggestions regarding the unforeseen dependencies, unexpected changes in agent availability, fault tolerance, the descriptions of agents, the feedback loop, and so on. We definitely believe that this submission has been further improved based on your suggestions! As the due of discussion phase is very close, we kindly ask if you could take a moment to review the responses and provide your feedback at your earliest convenience.\\n\\nThank you again for the time and effort you put into reviewing our paper!\"}", "{\"title\": \"Thank you and look forward to further discussion\", \"comment\": \"Dear Reviewer 8PGi:\\n\\nThank you for your detailed comments and helpful suggestions! We are wondering if our responses have resolved your concerns. Please let us know if our response and clarification meet your expectations. We are eager to engage in further discussions and continue improving our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"> W3: In the completeness principle it is mentioned ``The array of sub-tasks [...] should include all necessary information'', what does this mean? What is the relation between ``necessary information'' and ``resolvable''?\\n\\n**Responses**: Thank you for your comments.\\n- *Regarding the completeness principle*: The completeness principle requires that all necessary (i.e., critical) information from the user query be preserved in the decomposed sub-tasks, including essential nouns, quantifiers, and other critical elements. These pieces of information may be distributed across different subtasks. 
While a subtask might include only some pieces of information from the user query, it is not acceptable for any particular piece of critical information to be omitted from all subtasks. The instructions given to the detector for improving the completeness can be found in Appendix A.5 of the paper.\\n- *Regarding the relation between necessary information and resolvability*: For a subtask, missing necessary information would make it irresolvable. For example, consider the subtask: \\\"Determine the number of full flights needed to transport 1% of the population of New York.\\\" This subtask is not resolvable because it lacks the necessary information given in the user query: \\u201c300-passenger capacity\\u201d.\\n\\nWe have made the description of the completeness principle clearer in the revised paper. Thank you again for your comments.\\n \\n\\n> W4: It is well known that LLMs cannot generate plans in the sense used in the planning community as there are no guarantees that the plan is correct nor that it actually solves the problem. My impression is thus that the proposed solution is more about learning a model of different agents capabilities than on planning.\\n\\n**Responses**: Thank you for your comments. \\nWe agree that LLMs might not always generate reasonable plans with a simple inference, especially without well-designed instructions to promote productive thinking, reasoning, and reflection. This challenge motivates us to propose a framework that uses more inference computation and reward signals to continually refine the plans automatically. 
While learning different agent capabilities is a critical problem in agent-oriented planning, as discussed in the preliminaries section, enhancing a multi-agent system to effectively perform and refine task decomposition and allocation is also a central focus of the proposed framework, aiming at achieving performance boost compared to systems that rely on simple inference with LLMs.\\n\\n\\nThank you again for your comments. Hope the above responses about the scope of this paper can well address your concerns.\\n\\n\\n> Q1: Why is it beneficial to use natural language over PDDL and other languages specifically designed for planning? \\n\\n**Responses**: Thank you for your comments! \\nWhile recent studies propose to leverage external planners and utilize LLMs to transform natural language to other languages like PDDL, several challenges remain, such as:\\n\\n- PDDL has limited capability in handling continuous actions/states, uncertain effects, or incomplete information. When combining LLMs with PDDL, these methods show mixed performance on tasks with partially specified goals. PDDL and similar languages are typically focused on domain-specific problems like robot path planning. They lack adaptability when addressing more ambiguous, broad questions such as summarizing a company's recent performance. In such cases, we find that natural language question-answering scenarios cannot be effectively formulated within this framework to achieve satisfactory results.\\n- LLM-generated PDDL information, particularly goals, is sensitive to natural language prompts. Furthermore, LLMs require sufficiently good examples to generalize well. Poor examples significantly impact LLM-generated results, leading to substantial upfront preparation requirements and limited generalization capabilities, especially when designing task-specific prompts. 
This phenomenon has been mentioned in works combining LLMs with PDDL [10, 16].\\n\\nThe development of LLMs is progressing rapidly but is still far from perfect. In this study, **we focus more on expressing plans in natural language to better leverage LLMs' understanding and generation capabilities in the natural language**.\\n\\nThank you again for your comments. We agree that the combination of LLM and PDDL can achieve excellent results given adequate design, which can be a promising future direction.\"}", "{\"comment\": \"We sincerely appreciate your detailed comments and valuable suggestions! We provide the following responses to address your concerns and answer your questions point by point.\\n\\n\\n> W1: The paper does not relate to the rich and long history of multi-agent planning. A starting point could be [1].\\n\\n**Responses**: Thank you for your valuable suggestions regarding the related work.\", \"we_provide_additional_discussions_related_to_the_studies_in_cooperative_multi_agent_planning_as_you_mentioned\": \"Cooperative multi-agent planning (MAP) has been an active research area for many years [1]. While early MAP works focus on coordination methods for agents using planning representations [2], a significant turning point is the introduction of MA-STRIPS [3], a minimalist multi-agent extension of the well-known STRIPS planning model [4], which provided a widely accepted standardized format for MAP. Following this, MA-PDDL [5], the multi-agent version of PDDL [6], marked the first attempt to create a de facto standard specification language for MAP tasks. Both symbolic methods and reinforcement learning-based methods [7,8] have become mainstream approaches in MAP. In recent years, the development of LLMs has brought considerable attention to LLM-empowered agents [9]. Some methods have enhanced the planning proficiency of LLMs by incorporating a planner [10-13], while others have explored combining an LLM with a lightweight neural planner [14,15]. 
These developments have injected new vitality into the advancement of MAP. \\n\\nThe above discussions have been added to the Related Work section in the revised paper that has been uploaded. We appreciate your helpful suggestions for further improving our submission. Thank you again!\\n\\n\\n> W2: The design principles seem a bit backwards. ... it is a hard computational problem to determine what problems an agent can solve. If the set of sub-tasks contain redundancy it is either a problem with the planner or there is reason, such as a need to deal with non-deterministic outcomes of actions. \\n\\n**Responses**: Thank you for your insightful comments regarding the design principles. Overall, we do not expect the meta-plan to satisfy these principles through a single and simple LLM inference (as you mentioned above, this can be really challenging). Instead, **these principles serve as targets guiding us on how to design strategies and mechanisms to enable LLMs to generate more productive thinking, reasoning, and reflection**, thereby resulting in responses that meet these principles. \\nFor more detailed responses, please see below:\\n\\n- As you mentioned above, the problems an agent can solve are hard to determine, especially when relying on a simple inference of LLMs. In the proposed framework, this manifests as uncertainty in the agent's capabilities, leading to the initial plan generated by the fast decomposition and allocation process potentially being unsatisfactory. To address this, we incorporate a reward model to provide feedback signals for the meta-agent to refine the plans and design a representative works mechanism to further enhance the descriptions of agent capabilities.\\n- The primary purpose of the non-redundancy principle is to optimize efficiency while ensuring the plans lead to correct responses. 
Redundancy could be considered a problem with the planner, as it is challenging to ensure the generated plans are non-redundant during fast decomposition. That is why we need to adopt a detector to further refine subtasks. The experiments shown in Table 2 of the submission confirm that the detector can bring a 7.1% improvement.\\n- Furthermore, we acknowledge the need to address non-deterministic outcomes of actions in some cases. For instance, initial uncertainty about an agent's capabilities might necessitate redundant calls to ensure subtask execution quality. However, as more tasks are executed and the system's understanding of the agent's capabilities becomes clearer, the system can perform much better in generating plans without redundancy.\\n\\nWe hope these responses address your concerns about the design principles. Thank you again for your comments!\"}", "{\"title\": \"Look forward to receiving your feedback\", \"comment\": \"Dear Reviewer LMUm,\\n\\nAs the discussion phase draws to a close in less than a day, we kindly ask if our responses have addressed your concerns. We look forward to receiving your feedback.\\n\\nThank you again for the time and effort you have invested in reviewing our paper!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"We sincerely appreciate your detailed comments and valuable suggestions! We provide the following responses to address your concerns and answer your questions point by point.\\n\\n> W1 \\\\& Q1: This paper could benefit from a more detailed discussion of how these principles are uniquely applied in the context of LLM based multi-agent systems.\\n\\n**Responses**: Thank you for your detailed comments and helpful suggestions!\\n- Designing an agent-oriented planning framework that adheres to these principles is indeed non-trivial and brings unique challenges. The reasons are twofold. 
Firstly, the meta-agent has a limited understanding of the abilities of expert agents in the multi-agent systems, which can significantly impact how effectively the meta-agent decomposes and allocates tasks. Secondly, LLM-powered agents, including both expert and meta-agents, are still far from perfect.\\n- To apply these principles in the context of LLM-based multi-agent systems, the proposed framework includes a fast decomposition and allocation process to generate an initial plan. This initial plan is then continuously refined and improved according to the detector and scorer to satisfy solvability, completeness, and non-redundancy. A feedback loop is also established to enhance the representations of the abilities of expert agents. \\n- While these principles may appear general in nature, our work, to the best of our knowledge, **is the first that proposes an explicit specification and a systematic design following these principles** in the domain of LLM-based multi-agent systems. We hope that these principles, along with the framework we have outlined, can serve as a good starting point and inspire further research in this field. \\n\\nThe above discussions have been incorporated into Section 3 of the revised paper that has been uploaded. Thank you again for your suggestions for improving our submission.\\n\\n\\n\\n> W2: There seems to be a clear lack of technical details regarding how each agent is constructed, operates, and integrates with the meta-agent... Without these technical details, the framework\\u2019s effectiveness remains partially speculative, relying more on theoretical design principles than on concrete, replicable methods.\\n\\n**Responses**: Thank you for your comments. For more technical details regarding agents:\\n- *How agents are constructed and operate*: LLM-powered agents are constructed by providing specialized tools and unique system prompts that set their identity and instructions. 
These agents are powered by LLMs for query understanding, tool use, and response generation. For example, a search agent would be enabled with search engine APIs and instructed to call these APIs to obtain up-to-date information, while the code agent would have access to the code interpreter and be instructed to write and execute code for solving problems.\\n- *How agents integrate with the meta-agent*: The meta-agent serves as the brain and controller of the multi-agent system, which allocates subtasks to the agents by sending messages. Once an agent completes its task, it returns the results to the meta-agent in a similar message flow. \\n- *The descriptions of agent capabilities*: In this study, we employ a combination of predefined natural language descriptions and representative works. A natural language description can be manually provided or automatically generated, detailing the general and task-independent abilities of agents. The representative works consist of tasks that the agent has effectively tackled in the past, which complement the natural language descriptions and are often task-dependent, allowing for continuous updates during execution.\\n- *Replicable*: All technical details, including the adopted system prompts and the LLMs, are included in Appendix A and B in the submission. Besides, we will release the source code to promote further research in the community. \\n \\nWe have incorporated the above technical details and made them more clear in the revised paper. Thank you again!\"}", "{\"summary\": \"The paper presents a multi-agent framework for problem-solving. In particular, the framework uses a meta-agent to decompose a task into subtasks and assign each subtask to downstream expert agents based on their expertise. Then, it uses a reward model to evaluate the performance of the expert agents and use the rewards as signals for re-planning. 
Additionally, the framework incorporates a feedback system to further improve the robustness and accuracy of problem-solving. Empirical analysis over a numerical reasoning dataset indicates how the proposed framework outperforms the benchmarks, such as direct query to GPT-4o and chain-of-thought. Additionally, a set of ablation studies demonstrates the necessity of each component, such as the plan detector and the reward model, within the framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is well-organized, presenting a detailed and thorough explanation of the proposed framework. A particularly compelling aspect is the approach of breaking down tasks into smaller components and assigning them to specialized agents according to their areas of expertise. This decomposition allows each agent to handle tasks within its domain, enhancing overall efficiency and precision. Additionally, the integration of a feedback system significantly boosts the framework\\u2019s performance by refining the agents\\u2019 actions and ensuring adaptive learning.\", \"weaknesses\": \"There are several limitations of the work that the authors may consider addressing:\\n\\n1. Expert agents: the difference between each expert agent is their input prompts, but they are using the same underlying model (GPT-4o). It could be more interesting to replace each expert agent with the current state-of-the-art model in their domain. I believe there are plenty of works to fine-tune the language models for code generation, solving math problems, etc.\\n\\n2. I don't see the significance of the commonsense agent. Feels like the meta agent is also doing some common sense reasoning (e.g., understanding and decomposing tasks). If possible, please provide some use cases where the commonsense agent is necessary.\\n\\n3. The cost (time and number of tokens) is high compared to direct querying GPT-4o, given that it incurs over 5x the cost for only a 10 percent improvement. 
This probably can be solved by optimizing the expert agents (see point 1).\", \"questions\": \"1. In Table 1, in my understanding, the prompt tokens and completion tokens refer to the AVERAGE NUMBER of tokens for EACH task, is it correct? Probably it's better to clarify this in the paper.\\n\\n2. Is this framework capable of collaborating with multiple agents with the same expertise and improving efficiency? For example, using multiple math agents to solve a complex math problem, while these agents run in parallel.\\n\\n3. In my understanding, the four agents in the experiment are GPT-4o with different input prompts, are there any additional fine-tuning to improve the expertise of each agent?\\n\\n4. The paper talks about solvability, completeness, and non-redundancy at the beginning of the paper. Are there any quantitative results showing that the proposed framework addressed these challenges?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you very much for your reply\", \"comment\": \"To Reviewer LMUm:\\n\\nThank you very much for your reply! \\nPlease let us know if these responses meet your expectations. We are eager to engage in further discussions and continue improving our work.\"}", "{\"title\": \"Thank you very much for your reply\", \"comment\": \"Thank you very much for your reply and your decision to raise the score! Your positive feedback means a great deal to us, and we are glad that our responses have addressed your concerns.\\n\\nWe appreciate your time and effort in reviewing our paper. Thank you again!\"}", "{\"comment\": \"We sincerely appreciate your detailed comments and valuable suggestions! 
We provide the following responses to address your concerns and answer your questions point by point.\\n\\n\\n> W1: It could be more interesting to replace each expert agent with the current state-of-the-art model in their domain.\\n\\n**Responses**: Thank you very much for your valuable suggestions regarding the expert agents.\\n- We replace the math agent with Qwen2-Math-7B[1] and the code agent with DeepSeek-Coder-V2[2], and conduct experiments on Husky-QA. The experimental results are summarized in the following table, demonstrating that the proposed framework is compatible with and can be further enhanced by expert agents.\\n| | Accuracy |\\n| ----- | ---- |\\n| Agents employing GPT-4o | 43.7\\\\% |\\n| Expert agents | 45.5\\\\% |\\n\\n- It is worth noting that the proposed framework does not impose any constraints on the LLMs that can be employed as the backbone of agents. \\n\\nThe above discussions and experimental results are added to Appendix E.3 in the revised paper that has been uploaded. Thank you again for your suggestions on the expert agents!\\n\\n\\n> W2: If possible, please provide some use cases where the commonsense agent is necessary.\\n\\n**Responses**: Thank you for your comments on the commonsense agent. Following the previous study[3], the commonsense agent specializes in solving subtasks that require commonsense knowledge, which is different from what the meta-agent does. For example, a subtask ``Determine the melting point of gold and silver'' can be handled by a commonsense agent since it possesses commonsense knowledge of metal melting points.\\n\\nWe have provided better explanations and examples of agents to make them clearer in the revised paper. Thank you again!\\n\\n\\n> W3: The cost (time and number of tokens) is high compared to direct querying GPT-4o, given that over 5x cost for only a 10 percent improvement. 
This probably can be solved by optimizing the expert agents (see point 1).\\n\\n**Responses**: Thank you very much for your insightful comments. \\n- We agree that using expert agents can, to some extent, reduce the time and token costs of the proposed framework, since more complex subtasks can be handled by a single expert agent. In the above responses to W1, we have also demonstrated that the proposed framework is compatible with expert agents.\\n- In Table 1 of the submission, we aim to highlight that, under a fair comparison, our framework achieves significant performance improvements compared to others, while **the cost of our framework is at the same level as existing multi-agent systems and is notably lower than approaches that lack a suitable design**, such as Meta-Agent: Traversal. Note that these complex queries cannot be resolved by a single expert agent alone, which can only complete the subtasks it specializes in.\\n- These additional costs, which include task decomposition, allocation, and modifications carried out by the meta-agent, are affordable and **can be worthwhile as long as they bring significant improvements in accuracy and stability in real-world applications**. Such an exploration aligns with recent studies [6] in inference-time computation, aimed at efficiently utilizing more tokens to resolve tasks that a single inference cannot fulfill. \\n\\nWe have highlighted the above discussions on cost and utility in Section 5.2 in the revised paper. Thank you again!\\n\\n\\n\\n> Q1: In Table 1, in my understanding, the prompt tokens and completion tokens refer to the AVERAGE NUMBER of tokens for EACH task, is it correct? Probably it's better to clarify this in the paper.\\n\\n**Responses**: Thank you for your comments. The prompt tokens and completion tokens refer to the total costs of the whole test data. 
We have made it clear in the revised paper.\\n\\n\\n> Q2: Is this framework capable of collaborating with multiple agents with the same expertise and improving efficiency? For example, using multiple math agents to solve a complex math problem, while these agents run in parallel.\\n\\n**Responses**: Thank you for your helpful suggestion!\\nBased on the proposed framework, we set up an experiment involving 4 different math agents, using GPT-3.5, GPT-4o, Qwen2-Math-7B-Instruct, and Llama-3.2-3B[4], respectively. We instruct the multi-agent system to solve the complex math problem from MATH[5]. With the proposed framework, the queries are decomposed into multiple queries and these agents run in parallel to resolve them. The experimental results are shown in the following table, demonstrating the effectiveness (at least 6\\\\% improvements) of the proposed framework when applied in the suggested scenario.\\n| | Accuracy |\\n| ----- | ---- |\\n| GPT-3.5 | 43\\\\% |\\n| GPT-4o | 62\\\\% |\\n| Qwen2-Math-7B-Instruct | 66\\\\% |\\n| Llama-3.2-3B | 36\\\\% |\\n| ours | 72\\\\% |\\n\\nWe have added the above discussions and experiments in Appendix E.4 in the revised paper. Thank you again!\"}", "{\"title\": \"Look forward to receiving your feedback on the author responses\", \"comment\": \"Dear Reviewer LMUm,\\n\\nI hope this email finds you well.\\n\\nWe really appreciate your helpful suggestions regarding the related works in multi-agent planning, the design principles, natural language descriptions, verifications of the proposed framework, and so on. We definitely believe that this submission has been further improved based on your suggestions! 
As the deadline of the discussion phase is very close, we kindly ask if you could take a moment to review the responses and provide your feedback at your earliest convenience.\\n\\nThank you again for the time and effort you put into reviewing our paper!\"}", "{\"title\": \"Responses to your following questions (2/2)\", \"comment\": \"> I am still uncertain about the actual effort required for parameter fine-tuning and its practical implications. More experiments may be necessary to address this concern.\\n\\n**Responses**: Thank you for your comments. \\n- Note that the proposed framework **does not require any effort-intensive parameter fine-tuning**. The only hyperparameter that needs adjustment is the threshold used by the reward model to define a sufficiently good plan.\\n- To further explore the effects of the threshold value, we conduct experiments with varying threshold values on the Husky-QA dataset. The experimental results are shown in the following table. These results indicate that the proposed framework has relatively low sensitivity to the hyperparameter, showing that within a reasonable range, the performance only experiences minor changes and remains superior to the baseline.\\n| | Accuracy (\\\\%) | \\n| ----- | ---- |\\n| GPT-4o | 33.3 | \\n| REACT | 37.6 | \\n| HUSKY | 39.6 | \\n| Ours (threshold = 0.875) | 43.7 | \\n| Ours (threshold = 0.750) | 43.1 |\\n| Ours (threshold = 0.625) | 42.1 |\\n\\nWe have added the above experiments and discussions in Appendix F.3 in the revised paper. Thank you again! \\n\\n\\n\\n> I think some previously studied methods (not just those listed above) may serve as alternatives to the reward model. 
Could additional ablation studies be conducted to demonstrate the advantages of the new reward model compared to existing ones?\\n\\n**Responses**: Thank you for your suggestions on the related studies.\\n\\nAfter reading the recommended papers, we noticed that the methods discussed in these papers might not be well-suited to our scenario. The reasons include: (i) In agent-oriented planning for multi-agent systems, the reward model needs to make performance predictions based on the capabilities of the agent, which can vary significantly between different expert agents. (ii) The performance predictions in agent-oriented planning are made on a per-query basis, whereas most of the mentioned studies focus more on the overall effectiveness on an entire benchmark. (iii) Some mentioned studies rely on model family or model size, which might not be available to agents with closed-source LLMs. The proposed framework does not restrict whether the LLM used by an agent is open-source or closed-source.\\n\\nThese differences in application scenarios make it challenging to conduct a reasonable comparison between the proposed reward model and the recommended studies. Thank you again!\\n\\n\\n--- \\n\\nWe have uploaded a **revised paper that includes all the experiments and discussions in the above responses**, with the major modifications clearly highlighted. Thank you again for your reply and helpful suggestions!\"}", "{\"summary\": \"The paper describes a method for solving queries using a multi-agent system, with multiple agents which are experts at different sub-tasks. A meta-agent is then required to decompose the queries into multiple sub-tasks that are then allocated to suitable agents for solving them. The authors identified three critical principles for task completion:\\n\\nSolvability - Each sub-task should be solvable by at least one of the available agents. 
\\n\\nCompleteness - The set of sub-tasks should include all necessary information from the original query. That is, the aggregation of the sub-tasks should provide a comprehensive answer to the original query.\\n\\nNon-redundancy \\u2013 The set of sub-tasks should not include duplicate or unnecessary elements.\\n\\nShould any of these requirements fail, the meta-agent is required to revisit the task decomposition and/or task allocation.\\n\\nTo determine if a sub-task is completely resolved, you would need to check if any of the agents can solve it, but due to the overhead required, they create a training dataset for a reward model using the fast decomposition process and a scorer, which evaluates agent responses. This reward model is used to reduce the overhead of agent calls. It is used to approximate whether a sub-task is resolved by an agent by checking if its score is sufficiently high (above a certain threshold). In cases where this score is not high enough for any agent, the sub-task is deemed to not fit the solvability criteria and the meta-agent replans.\\n\\nThey then explore when sub-tasks are lacking information/ambiguous or are too complex, and use a similarity calculation over the embeddings of the subtasks. If the similarity is too high, there are tasks similar to it and it should be re-described according to the similar representative sub-task. If it does not meet a threshold, then the sub-task is considered too complex and is modified.\\nA detector is added to check that all key elements of the original query exist in all the sub-tasks, and that no two sub-tasks resolve the same key elements. If either check fails, a recommendation based on that key element is made to help remove overlapping subtasks or supplement missing details.\\nThis is evaluated against a number of baselines and experiments are done based on a numerical reasoning dataset. 
It is shown to outperform the baselines.\\n\\nAblations are done to show that each component adds meaningfully to the model.\\n\\nThe related work is only listed after all the results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The work shows three components which improve the ability of multi-agent LLM systems to improve the results of general LLM queries as compared to current methods.\\n\\nAll three components are shown to meaningfully add to the overall model, and directly tackle the presented critical principles for design.\", \"weaknesses\": \"The work is difficult to follow at times, I believe it can be made much clearer, especially with regards to Figure 2. It is difficult to understand this diagram without enough direct context. There is no clear direction that it takes. The detector does not seem to lead to a replan as described.\\n\\nThe ordering of the sections, particularly the related work, and how it reads in context at the end, does not make sense, was this just compiled in the wrong place accidentally?\\n\\nThe results only have one dataset to my understanding. Can this be extended to more datasets?\\n\\nI have concerns that the rewards model does not generalize outside of this dataset? Or would that simply require more training on bigger datasets?\\n\\nI believe this work is good and novel, however it could be improved upon with clearer writing.\", \"questions\": \"The results only have one dataset to my understanding. Can this be extended to more datasets?\\n\\nI have concerns that the rewards model does not generalize outside of this dataset? Or would that simply require more training on bigger datasets?\\n\\nThe ordering of the sections, particularly the related work, and how it reads in context at the end, does not make sense, was this just compiled in the wrong place accidentally?\\n\\nAre there cases where the scorers are human experts? 
It was a bit ambiguous as to whether this was in fact the case.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Due to ambiguity of human experts with the scoring, I am unable to comment, but believe if that is checked and declared that there are no concerns beyond that.\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a new framework for multi-agent systems where meta-agents decompose user queries into sub-tasks, allocate these tasks to appropriate agents, and evaluate the results based on three key design principles: solvability, completeness, and non-redundancy. The meta-agent also uses a reward model to assess agent performance and includes a feedback loop for continuous improvement. The proposed framework demonstrates noticeable advancements in solving complex real-world problems by efficiently coordinating agents compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper proposes an interesting framework for agent-oriented planning in multi-agent systems, including an integrated feedback loop and reward model, which appear to be novel.\\n\\nThis paper is well-structured, with a clear methodology and comprehensive experimental analysis.\\n\\nThis paper is clearly written and well-organized, making the complex concepts of agent-oriented planning accessible.\\n\\nThis paper addresses a crucial challenge in multi-agent systems\\u2014optimizing task decomposition and agent coordination, which has substantial implications for practical applications in artificial intelligence.\", \"weaknesses\": \"While this paper explored a very important problem of agent-oriented planning, I do have a few questions/concerns regarding the reported research work:\\n\\n1. The principles of solvability, completeness and non-redundancy appear to be very general in nature. 
They may not be entirely new to the field of multi-agent systems or task planning. The novelty may not lie in the principles themselves but in how they are implemented within the specific context of large language models (LLMs) and agent-oriented systems. While the paper integrates these principles into the implementation of a meta-agent, the principles alone may not represent a breakthrough. This paper could benefit from a more detailed discussion of how these principles are uniquely applied in the context of LLM based multi-agent systems. \\n\\n2. There seems to be a clear lack of technical details regarding how each agent is constructed, operates, and integrates with the meta-agent. While the paper describes agents such as the \\u201csearch agent,\\u201d \\u201cmath agent,\\u201d and \\u201ccommonsense agent,\\u201d I don't quite understand their underlying architectures and decision-making processes (e.g., the underlying models used for each agent and their training processes). Additionally, the descriptions of agent capabilities seem to be vague. This lack of technical depth makes it difficult to assess the robustness, scalability, and adaptability of the agents, as well as their performance in diverse real-world applications. Without these technical details, the framework\\u2019s effectiveness remains partially speculative, relying more on theoretical design principles than on concrete, replicable methods.\\n\\n3. The reward model developed in this paper operates like a surrogate to predict the performance of using any specific task agent to tackle any given sub-task. The general idea of predicting agents' performance is not new. The concrete design of the network architecture and the training algorithm for the reward model may be new. However, the corresponding technical contribution was not clearly highlighted and justified. The same concern also applies to the feedback loop, which is commonly used to update the surrogate models in the literature. 
Its novelty under the newly proposed framework may need to be better elucidated. Potentially, this paper may benefit by providing a more detailed comparison with existing surrogate models or a clearer explanation of how their approach differs from standard feedback loops in the literature.\\n\\n4. This paper introduces multiple algorithm/system parameters, such as reward model thresholds, agent selection criteria, and similarity measures, which may be challenging to fine-tune. This complexity makes it potentially difficult to adapt the proposed multi-agent system to new datasets or applications, as optimal tuning may require deep domain-specific knowledge and extensive experimentation, further affecting the system\\u2019s usability. In line with this concern, it might be helpful for the authors to provide some concrete guidelines or heuristics for parameter tuning, or to discuss potential strategies for automating or simplifying this process for new datasets or applications.\\n\\n5. The experimental evaluation in the paper appears to be limited in scope, as it primarily focuses on a single numerical reasoning dataset. It remains questionable whether the newly developed multi-agent system can generalize well across diverse real-world tasks or more complex, domain-specific applications. Hence, the authors may need to discuss potential challenges in applying their system to more diverse or complex tasks.\", \"questions\": \"Could you elaborate on how the principles of solvability, completeness, and non-redundancy are uniquely adapted for LLM-based multi-agent systems in this framework?\\n\\nCould you clarify the unique aspects of your reward model and feedback loop in this framework? How do they differ from traditional surrogate models, and what specific contributions do they offer to agent-oriented planning in LLM-based systems?\\n\\nWhat strategies or guidelines do you suggest for fine-tuning the system\\u2019s parameters for new datasets or applications? 
Are there ways to simplify or automate this tuning process to improve usability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you very much for your reply\", \"comment\": \"To Reviewer AMcZ:\\n\\nThank you very much for your reply! We are working hard to prepare responses to your following related concerns and suggestions. Due to resource constraints, we will provide these responses ASAP.\"}", "{\"title\": \"Thank you for your reply\", \"comment\": \"We sincerely appreciate your reply and your decision to raise the score! Your positive feedback is invaluable to us, and we are pleased that our responses have addressed your concerns.\\n\\nThank you again for your time and effort in reviewing our paper!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> W3.1: The concrete design of the network architecture and the training algorithm for the reward model may be new. However, the corresponding technical contribution was not clearly highlighted and justified.\\n\\n**Responses**: Thank you for your comments on the reward model.\\n- Different from existing studies that use a reward model to evaluate generated responses, the proposed framework utilizes the reward model to predict the quality of agents' responses to tasks for assessing whether an agent is suited for a task, providing an efficient evaluation method that does not need the execution of tasks.\\n- To achieve this, the construction of the reward model's training data differs from that of previous studies. Specifically, we gather sub-tasks and agent descriptions as inputs and use the scores of generated responses as outputs to form the training data. 
The training objectives and detailed training process are elaborated in Section 4.2 of the paper.\\n- We have conducted experiments to measure the contributions of the reward model (RM). The results are presented in Table 2 and Table 3 of the paper, and summarized as follows. From the table we can observe that the proposed framework experiences a 5\\\\% performance drop when the reward model is removed. Besides, an RM trained with manual scoring and comprehensive parameter tuning can further lead to significant improvements (3.2\\\\% and 4.2\\\\%, respectively). These results demonstrate the importance and contributions of the reward model.\\n| | Accuracy |\\n| ----- | ---- |\\n| Ours | 43.7\\\\% |\\n| w/o RM | 38.7\\\\% (-5.0\\\\%) |\\n| w/ RM trained based on manual scoring | 46.9\\\\% (+3.2\\\\%) |\\n| w/ full parameter tuning RM | 46.9\\\\% (+4.2\\\\%) |\\n\\nWe have incorporated the above discussions on the reward model and made them clearer in the revised paper. We hope these responses can address your concerns about the reward model. Thank you again!\\n\\n\\n\\n> W3.2: Potentially, this paper may benefit by providing a more detailed comparison with existing surrogate models or a clearer explanation of how their approach differs from standard feedback loops in the literature.\\n\\n**Responses**: Thank you for your suggestions on the feedback loops.\\n- To the best of our knowledge, there is currently no existing feedback loop that can be directly applied to multi-agent systems for agent-oriented planning.\\n- We conduct an additional ablation study to measure the contribution of the feedback loop. The experimental results indicate that the proposed framework experiences approximately a 1\\\\% performance decrease when the feedback loop is disabled. \\n\\nWe have incorporated the above discussions and experiments in Appendix E.8 of the revised paper. 
Thank you again.\\n\\n\\n> W4 \\\\& Q3: it might be helpful for the authors to provide some concrete guidelines or heuristics for parameter tuning, or to discuss potential strategies for automating or simplifying this process for new datasets or applications.\\n\\n**Responses**: Thank you for your comments on the hyperparameter tuning.\\n- In this study, the only hyperparameter that needs tuning is the threshold used for the reward model to define a sufficiently good plan (i.e., the subtask and the corresponding assigned agent). This score can be provided by a human-expert scorer or an LLM-based scorer, combining aspects such as correctness, relevance, and completeness. These scores can be flexibly set up according to downstream tasks, and, for convenience, normalization can be applied.\\n- The setting of the threshold also depends on the capability of the LLM, particularly its instruction-following ability. A powerful LLM might allow for more error tolerance, meaning that it can still provide satisfactory responses even if the plan might not get a rather high score. When the expert agent's capabilities are not as powerful, the thresholds need to be set higher (for a good enough plan), which may lead to more iterations needed to refine plans.\\n\\nWe have added the above discussions on the hyperparameter tuning in Appendix F.3 of the revised paper. Thank you again for your comments!\"}", "{\"summary\": \"This paper introduces a framework for agent-oriented planning in multi-agent systems, where a central \\\"meta-agent\\\" decomposes complex user queries into sub-tasks, assigns these sub-tasks to specific agents, and evaluates their performance. The authors focus on three key principles, namely, solvability, completeness, and non-redundancy, to guide efficient task decomposition and assignment. 
The framework also includes a feedback loop, enabling the meta-agent to refine its planning over time.\\n\\nThe framework\\u2019s emphasis on structured decomposition and task dependencies is well-suited for complex queries that involve multiple agents with diverse capabilities. By ensuring each sub-task aligns with an agent\\u2019s expertise, the system promotes both efficiency and reliability.\", \"i_have_a_few_questions\": [\"Task dependencies are identified during decomposition, but dependencies may also emerge during execution. How does the framework adapt when unforeseen dependencies between sub-tasks arise? Could this affect the overall solvability or efficiency of the task set?\", \"The framework depends on structured task decomposition, yet real-world environments are often unpredictable and dynamic. How does this approach handle unexpected changes in agent availability or evolving task requirements without needing a complete re-plan?\", \"Non-redundancy is a key focus, yet redundancy can be valuable for fault tolerance. How does the framework balance efficiency with the potential need for redundancy, particularly in scenarios where backup solutions could be essential?\", \"The scalability of the approach isn\\u2019t clear to me. As the number of agents and interdependent tasks increase, how does the meta-agent manage complexity? Are there limitations on the scale or number of agents?\", \"The meta-agent requires detailed knowledge of each agent's abilities for effective task allocation. But it isn\\u2019t immediately clear to me, how are these capabilities/descriptions in D represented? Can you provide more details?\", \"Related to the comment above, can you discuss what representation is best suited for agent descriptions? Are these representations task dependent?\", \"Another follow-up on the above comment, can you discuss about the process/requirements if these representations need to be verified or even updated through the process? 
How can the approach be adapted to handle such cases?\", \"While the feedback loop allows for ongoing improvements, how does it prevent instability or oscillations in planning strategies? How would you introduce safeguards into the system or meta-agent to avoid fluctuating between different planning approaches?\", \"Overall, I'm happy with the contributions made in this paper and vote for a weak accept. I'd be happy to raise my score given mine and my fellow reviewers' comments are adequately addressed. Thanks!\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"-- See above\", \"weaknesses\": \"-- See above\", \"questions\": \"-- See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper concerns solving problems in a multi-agent context, with multiple agents that have different expertise. The authors propose to use an LLM meta agent that decomposes the queries into sub-tasks and allocates the subtasks to the agents.\\n\\nThe reviewers agree that the paper has merit, in particular, appreciating the three meaningful and structured components for task completion.\\n\\nMost of the weaknesses evolve around the writing, the related work on multi-agent planning, and how the method generalizes. Most of these concerns were mitigated during the discussion phase, and we recommend acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was very lively and the authors and reviewers enaged well, improving the paper significantly.\"}", "{\"title\": \"Thank you very much for your reply!\", \"comment\": \"Thank you very much for your appreciation of the above responses! For the additional questions, we provide the following responses:\\n> Could you please clarify how to refine the plan (e.g., changing the prompt?) 
based on a single numerical reward value?\\n\\nThank you for your comments. Overall, we need to automatically identify (the most likely) issues of the plan, with the help of the feedback from the reward model and the proposed representative work mechanism, and then instruct the meta-agent to take the appropriate actions to refine the plan. \\n\\nTo be more specific, the reward model provides scores to predict the quality of responses when a decomposed subtask is allocated to agents. Based on these scores, the meta-agent would adopt the following strategies to refine the plan if necessary:\\n- One scenario is when the subtask (generated in the fast decomposition and allocation process) is deemed a failure (i.e., all scores related to this subtask fall below a threshold). In this case, we would instruct the meta-agent to revise and {\\it replan} for this subtask (please refer to Appendix A.2 for the adopted prompts).\\n- Another scenario is when allocating the subtask to a particular agent is predicted to be suitable. In this case, we need to determine if the subtask still requires further enhancement, with the help of the representative works of this agent. For example, we might need to supplement details lost during the decomposition ({\\it re-describe}, please refer to Appendix A.4 for the adopted prompts) or to further decompose the sub-task into simpler ones ({\\it plan-in-detail}, please refer to Appendix A.3 for the adopted prompts). \\n\\nFor more detailed and formatted introductions of the above process, please refer to Sections 4.1 and 4.2 in the submission. 
Thank you again!\\n\\n> Regarding the reward model, if there are some outliers (bad plans) with high rewards, is there a post-processing technique to detect and resolve such outliers?\\n\\nThank you for your comments on the outliers.\\n\\nIn the proposed framework, when an outlier is assigned high rewards, it can still be identified and then refined by the meta-agent through the proposed representative work mechanism. Moreover, the representative works of agents would be continuously enhanced through a feedback loop, improving the description of the agent's task-specific capabilities to address the potential outliers that the reward model may not have handled well.\\n\\n--- \\n\\nThank you again for your reply and helpful suggestions! Please let us know if these responses meet your expectations. We are eager to engage in further discussions and continue improving our work.\"}", "{\"title\": \"Thank the authors for providing further information and experiment results\", \"comment\": \"Thank the authors for providing further information and experiment results. I think all my major concerns have been addressed. I will raise my score accordingly.\"}" ] }
EqCbc4wrzy
MDPE: A Multimodal Deception Dataset with Personality and Emotional Characteristics
[ "Cong Cai", "Shan Liang", "Xuefei Liu", "Kang Zhu", "Zhengqi Wen", "Jianhua Tao", "Heng Xie", "Jizhou Cui", "Yiming Ma", "Zhenhua Cheng", "Hanzhe Xu", "Ruibo Fu", "Bin Liu", "Yongwei Li" ]
Deception detection has garnered increasing attention in recent years due to the significant growth of digital media and heightened ethical and security concerns. It has been extensively studied using multimodal methods, including video, audio, and text. In addition, individual differences in deception production and detection are believed to play a crucial role. Although some studies have utilized individual information such as personality traits to enhance the performance of deception detection, current systems remain limited, partly due to a lack of sufficient datasets for evaluating performance. To address this issue, we introduce a multimodal deception dataset, MDPE. Besides deception features, this dataset also includes individual differences information in personality and emotional expression characteristics, enabling exploration of the impact of individual differences on deception behavior. It comprises over 104 hours of deception and emotional videos from 193 subjects. Furthermore, we conducted numerous experiments to provide valuable insights for future deception detection research. MDPE not only supports deception detection, but also provides conditions for tasks such as personality recognition and emotion recognition, and can even support study of the relationships between them. We believe that MDPE will become a valuable resource for promoting research in the field of affective computing.
[ "deception detection", "affective computing", "multimodal dataset" ]
Reject
https://openreview.net/pdf?id=EqCbc4wrzy
https://openreview.net/forum?id=EqCbc4wrzy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yAguIMLoih", "u6d0GEFymF", "j65DeK9YfG", "ifFH50gnzk", "gwUmZxLCe5", "g0wKc6pCNy", "dmn4sPmXbx", "a6KxU0FZvp", "UgB5da07Kb", "OUwdhhkcWZ", "86nhEAtA7K", "7gFAMHRJi1", "64pbihGPXh" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_review" ], "note_created": [ 1737523926274, 1731923677246, 1731987202044, 1731987176246, 1732272915211, 1730490042075, 1732371821635, 1731923702797, 1730061906466, 1730417957450, 1734463970257, 1731918315947, 1730718768910 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8693/Authors" ], [ "ICLR.cc/2025/Conference/Submission8693/Authors" ], [ "ICLR.cc/2025/Conference/Submission8693/Authors" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_QeLm" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_bdmX" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_bdmX" ], [ "ICLR.cc/2025/Conference/Submission8693/Authors" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_Jk8p" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_ZBrA" ], [ "ICLR.cc/2025/Conference/Submission8693/Area_Chair_bXw5" ], [ "ICLR.cc/2025/Conference/Submission8693/Authors" ], [ "ICLR.cc/2025/Conference/Submission8693/Reviewer_QeLm" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you very much for your guidance and valuable suggestions. We are very grateful for your comments, which are very helpful for us to sort out the structure and motivation of the article.\\n\\n1. The concept of 'true emotional expression' pertains to the congruence between an individual's outward emotional display and their internal affective state. 
Our emotion induction procedure establishes a baseline for emotional responses, which is crucial for comparison with the emotional responses observed during the deception experiment. Experimental results have indicated that emotional features significantly contribute to fraud detection tasks. \\n2. In this study, we present a comprehensive multimodal deception detection dataset, encompassing visual, audio, and textual data, as well as personality and emotional features. This dataset, being the largest of its kind, offers a unique opportunity to investigate how personality and emotional traits can enhance deception detection. Our work not only lays the foundation for future research in emotion computing but also suggests potential methods to improve emotion recognition by integrating personality insights and enhancing personality recognition performance with emotional cues. Through extensive experiments, we have preliminarily demonstrated the value of integrating personality and emotional traits in deception detection tasks. While our application of machine learning methods may not have achieved the desired level of advancement, we believe that deception detection is inherently challenging, with various machine learning methods showing limited performance. This suggests a need for the research community to further explore the features and methods within this dataset. Thus, we propose a simple machine learning method as a baseline for our analysis. In the revised manuscript, we have conducted a more in-depth analysis of the impact of machine learning on new datasets, including remaining challenges and insights into the limitations of current architectures, and how this dataset can contribute to advancing research in this field.\\n3. In the updated manuscript, we have included information on the public availability of existing datasets. \\n4. 
Building upon the six emotions covered by the Chinese Emotional Video System (CEVS)\\u2014happiness, sadness, anger, fear, disgust, and neutrality\\u2014we have added relaxation and surprise to our dataset. Relaxation serves as an emotional baseline, allowing for the comparison of other emotional states. Deviations from a relaxed state during specific questions or situations may indicate deception, as inconsistent relaxation levels could suggest an attempt to deceive. Surprise, a spontaneous and difficult-to-fake emotion, can indicate truthfulness or deception; a genuine surprise response to an unexpected question or accusation may suggest unpreparedness and honesty, while a lack of surprise may indicate a prepared deceptive response. \\n5. Our experiments were conducted at least five times with randomly selected samples, and the average results were taken to ensure statistical significance.\\n6. Regarding ethical issues, we ensured that all experimental procedures were explained to the subjects, who provided explicit consent for the recording and publication of their conversation and video data in scientific conferences or journals. Our data collection and dissemination adhere to the principle of informed consent and comply with relevant laws, regulations, and ethical review requirements, all approved by our institution's Human Subjects Institutional Review Board. We have implemented privacy protection measures, did not publish any personally identifiable information, and restricted dataset access to users who agree to use it solely for scientific research. The release of our deception dataset aligns with international standards and industry best practices, positively impacting scientific research, technological progress, and public safety. We have added a new section in the revised manuscript to address these ethical considerations, ensuring a comprehensive and responsible approach to handling the deception detection dataset.\\n7. 
We have thoroughly revised the manuscript to improve the quality of writing and address all language issues, including specific errors such as \\\"role.Although,\\\" \\\"new multi-modal deception dataset,\\\" and reference formatting, as pointed out by the reviewers.\"}", "{\"comment\": \"Thank you very much for your guidance and valuable suggestions. We are very grateful for your comments, which are very helpful for us to sort out the structure and motivation of the article.\\n\\n1. In the revised manuscript, we have condensed the \\\"Introduction\\\" section to enhance conciseness and shifted detailed background information, particularly from the second and third paragraphs, to the \\\"Related Work\\\" section. This reorganization improves the flow and maintains focus within the introduction. \\n2. We have clarified that the \\\"effective incentives\\\" mentioned (line 76) are monetary in nature and have provided details on their allocation (line 224) to effectively motivate participants in deceptive behavior. Additionally, we have meticulously revised the manuscript to address all linguistic inconsistencies and spelling errors, including those in Table 1, ensuring the accuracy and integrity of the presented data.\\n3. At the commencement of section 3.3, we have included a statement delineating the sequence of data collection activities. Participants initially complete the Big Five personality questionnaire to ascertain personality traits. Subsequently, half of the participants engage in emotional experiments while the other half partake in deception detection experiments. This balanced approach to experimental sequencing is intended to mitigate any potential influence of the sequence on the outcomes.\\n4. We have corrected the abbreviation \\\"DDC\\\" to \\\"DCC\\\" and offered a clear definition to prevent any ambiguity. \\n5. 
Furthermore, we have expanded upon the roles of the interviewers, detailing their background and responsibilities throughout the data collection process, thereby underscoring their pivotal role in our research.\\n6. To ensure participant anonymity, we have provided explicit details regarding the measures taken to remove or alter identifying information. The experimental videos provided for analysis exclude any segments containing personal details such as names, contact information, place of residence, education, occupation, and so forth.\\n7. The statement regarding CEVS videos being \\\"outdated\\\" and unable to effectively induce corresponding emotions refers to content that is no longer relevant or relatable to contemporary viewers. For instance, some videos intended to elicit happiness contained jokes that required historical context unfamiliar to most participants. As a result, these videos were replaced.\\n8. The 22 online videos were selected and annotated by 12 data annotators based on the CEVS criteria and evaluation methods (line 205). For further details, please refer to [1].\\n9. In section 4.4, we have revised the second paragraph to elucidate the structure of the specialized emotion recognition models and the specific emotional cues they utilize. We have also detailed the role of personality and emotional traits in fraud detection tasks.\\n10. The 24 interview questions were meticulously designed for this study (line 215), with input from five psychology researchers drawing primarily from the Fraud Triangle Theory and Rational Choice Theory. The questions were crafted to integrate knowledge from psychology, criminology, and sociology, aiming to comprehensively capture various facets of fraudulent behavior. The Delphi method was employed to gather and synthesize expert opinions, and the questions were refined through focus group discussions. 
Some questions were designed to reflect current trends in fraudulent behavior, capturing aspects that traditional methods might overlook. After an initial design phase, we conducted pre-experiments, collected feedback from participants, and revised the interview questions accordingly, resulting in the final set being the third iteration.\\n11. We have thoroughly revised the manuscript to enhance writing quality and address all language issues, including uncertainties and unclear descriptions as highlighted by your review.\\n\\nWe hope that these revisions address your concerns and strengthen the contribution of our paper. We are grateful for the opportunity to improve our work based on your valuable feedback. Thank you again for your comments and acceptance of our views.\"}", "{\"comment\": \"Thank you very much for your guidance and valuable suggestions. We are very grateful for your comments, which are very helpful for us to sort out the structure and motivation of the article.\\n\\n1. We acknowledge that our dataset, which includes only Chinese speakers, may limit its generalizability to other cultures and languages. To adapt the MDPE dataset for future research in different linguistic and cultural contexts, future studies could involve collecting similar datasets from diverse cultural backgrounds, allowing for comparative analyses that explore how cultural factors influence deception detection. In fact, we have started collecting and processing an extended version of MDPE data, which will greatly alleviate bias issues caused by population and cultural differences.\\n\\n2. In fact, we have a description of population statistics (section 3.2). In the updated manuscript, we provide a more detailed introduction to the demographic distribution of participants, taking into account factors such as age, gender, and other relevant characteristics, in order to have a clearer understanding of the composition of the dataset. \\n\\n3. 
We recognize the need for a deeper analysis of how emotional characteristics influence deception. In the revised manuscript, we will expand our discussion to include a thorough examination of the emotional traits present in the dataset and their potential impact on deception detection. Additionally, emotion recognition is presented with multi-labels.\\n\\n4. Regarding the relevance of personality and emotion in deception detection across different demographics, we believe that while the underlying psychological constructs may remain relevant, their expression and interpretation can vary across cultures. We will address this point in the updated manuscript, emphasizing the importance of cultural considerations in future research.\\n\\n5. We appreciate your suggestion to explore additional multimodal modules for future research. In the revised manuscript, we will include a discussion of potential multimodal approaches that could enhance our understanding of deception detection.\\n\\n6. While the primary focus of this study is deception detection, we acknowledge the creativity in the proposed comparison with other emotion recognition or personality detection models. We believe such comparisons could serve to validate the performance of personality and emotional traits in our context. Consequently, we will include a comparative analysis section in the new manuscript, assessing the performance of our approach against existing models.\\n\\nWe hope that these revisions address your concerns and strengthen the contribution of our paper. We are grateful for the opportunity to improve our work based on your valuable feedback. Thank you again for your comments and acceptance of our views.\"}", "{\"title\": \"Response\", \"comment\": \"(1) If I understand correctly, the video emotion induction procedure is supposed to elicit emotional responses for which outward emotional display and internal affective state are congruent? 
Was this assumption checked / validated in some way?\\n(4) Why is \\\"relaxation\\\" used as an \\\"emotional baseline\\\" when also \\\"neutrality\\\" is included in the set of emotions?\\n(5) Simply performing the experiment several times and averaging does not imply anything about statistical significance. Instead a statistical test needs to be performed and clearly described (goal, assumptions, following common reporting standards).\\n(8) I do not agree with the claim of ecological validity of video-based emotion induction, in particular when generalising to conversations. The at best marginal improvements for the model incorporating emotion information also raise doubts.\\n\\nOverall, I still think there are conceptual and methodological issues that speak against acceptance.\"}", "{\"summary\": \"In this study, the authors collected data by conducting extensive interviews to create a multimodal deception dataset featuring 193 participants. This dataset also includes information about the participants' personalities and emotional states. Furthermore, the authors explored how features from different modalities and labels impact and enhance the deception detection task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors conducted the data collection with great effort and provided comprehensive coverage regarding the data collection and label design.\", \"weaknesses\": \"1. The 'Introduction' section is somewhat verbose. While the authors did an excellent job of providing detailed background information on deception with examples from various modalities, I believe that some of this content, particularly the details in the second and third paragraphs, would be better suited for the 'Related Work' section.\\n\\n2. The authors mention \\\"effective incentives\\\" without clarifying what these entail (whether monetary or non-monetary) or how they were distributed. Additionally, there is a typo in Table 1.\\n\\n3. 
I suggest that the authors include a statement at the beginning of section 3.3 to clarify the order of the data collection activities. \\n\\n4. The abbreviation \\\"DDC\\\" at the end of page 4 is unclear\\u2014are you referring to \\\"DCC\\\" instead?\\n\\n5. The role of the 'interviewer' is crucial in this data collection process, as they are responsible not only for asking questions but also for being deceived and labeling responses. However, the manuscript lacks details about the interviewer's background and responsibilities.\\n\\n6. More specific information is needed regarding how the anonymity of participants is ensured, including what information has been removed or altered. \\n\\n7. The statement, \\\"some videos of the CEVS are outdated and cannot successfully induce corresponding emotions in our pre-experiments,\\\" is also unclear. What does \\\"outdated\\\" mean in this context, and why exactly are these videos unable to evoke specific emotions?\\n\\n8. Additionally, what is the source of the 22 online collected videos, and how did the 12 annotators reach a consensus on the labels?\\n\\n9. In section 4.4 (feature fusion), the second paragraph is poorly explained. After reading it, I'm still unclear about the structure of the \\\"dedicated emotion recognition model\\\" and what specific emotional cues it utilizes. Descriptions such as \\\"We feed a comprehensive set of emotional expression features into this model, allowing it to learn and adapt to the nuances of emotional communication\\\" are too abstract.\\n\\n10. Similarly, in the results presented in Table 2 and in the discussions regarding personality and emotional features on pages 7, 8, and 9, there is a lack of detail about what these features are and how they contribute.\\n\\n11. I would also like more information about the 24 interview questions (as detailed in Appendix A.3) that were carefully designed by experienced psychology researchers. 
Are there any references or theories they used to create these questions?\\n\\nOverall, the writing can be significantly improved, as there are many uncertainties and unclear descriptions throughout the manuscript. For example: \\\"Before the interview, details of the emotion scale can be found in Appendix C,\\\" and \\\"Some studies confirm that some of the five NEO-FFI (Neuroticism Extraversion-Openness Five-Factor Inventory) dimensions are related to deception.\\\" \\n\\nIn conclusion, it is evident that the work feels rushed, and substantial revisions are necessary to address the missing details. As it stands, it cannot be accepted in its current form.\", \"questions\": \"Please see the questions raised under the 'Weaknesses' section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors by reviewer bdmX\", \"comment\": \"Dear Authors,\\n\\nThank you for considering the concerns I have raised. Have you uploaded the revised version yet? I checked the uploaded PDF file, which still contains all the previous issues.\\n\\nIf a revised version is available, please upload it.\"}", "{\"comment\": \"8. The use of videos to induce emotions provides a controlled environment for eliciting specific emotional states, which is essential for scientific research. Although the social context of deception differs from watching a video, the emotional responses elicited by videos are ecologically valid, often depicting social interactions and emotional scenarios that mimic real-life situations. Videos can evoke a wide range of emotions, from basic to complex, which is important for studying how different emotions influence deceptive behavior and for training models to recognize a broad spectrum of emotional cues. 
The video-based emotion induction procedure provides a baseline for emotional responses that can be compared with those during the deception experiment, helping to identify discrepancies that might indicate deception, as individuals who are lying may struggle to regulate their emotions in a social context.\\n\\nWe hope that these revisions address your concerns and strengthen the contribution of our paper. We are grateful for the opportunity to improve our work based on your valuable feedback. Thank you again for your comments and acceptance of our views.\"}", "{\"summary\": \"The manuscript introduces the MDPE dataset, a multimodal resource for contributing to deception detection research. This dataset is organized by more than 104 hours of video, audio, and text recordings from 193 subjects, along with annotations on personality traits and emotional expressions. The authors argued that previous deception detection datasets were limited in scope, often lacking the inclusion of individual characteristics that could enhance deception prediction. The authors also demonstrated the effectiveness of MDPE via a series of experiments, indicating that multimodal and personality-aimed models perform better than unimodal models. The manuscript concludes by emphasizing the potentiality of the dataset for future research with consideration of three key tasks, deception detection, personality recognition, and emotion recognition.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"S1: The introduced dataset can be one of the good contributions for deception research, providing diverse modalities with consideration of personality and emotion annotations.\", \"s2\": \"The dataset's size is valuable, compared to other datasets. The authors provide the comprehensive dataset of 193 subjects with more than 104 hours. 
It represents a variety of demographics, covering a broad range of emotions and incorporating deception and genuine responses.\\nS3. The authors perform extensive benchmarking with multiple features (e.g., visual, acoustic, textual) and combine them with personality and emotional data, leading to valuable findings about multimodal fusion effectiveness.\", \"weaknesses\": \"I think that the first weakness of the manuscript is that the collected dataset includes only Chinese speakers, which are limited in its generalization. As the authors know, how do we apply this approach to future research considering other cultures and languages? Obviously, their selected text module (with Chinese-oriented LLM) is effective for their collected dataset. However, it can be applied to other datasets?\\n\\nSecond, although the authors indicate their approaches, I cannot find any demographic statistics in their collected dataset. Is it well-distributed with consideration of age, gender, and a number of characteristics? \\n\\nThird, the authors mentioned that emotional characteristics are included in the dataset, but the manuscript lacks a deep analysis of how these characteristics influence deception, instead focusing more on personality traits. This leads to underutilizing the dataset's emotional potential. Moreover, the authors are required to present whether the emotion recognitions are presented with multi-labels or not.\", \"questions\": \"1. How do we adapt MDPE to other languages and cultures? Would personality and emotion still be as relevant in deception detection across different demographics?\\n\\n2. What kinds of multimodal modules can we consider for future research? As presented in the manuscript, the authors considered several previously presented modules, but there can be additional selections for each module.\\n\\n3. 
Although the authors consider multi-task approaches for this topic, I think that it is required to compare the results with other emotion recognition or personality-detection models. How do the authors think?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors have introduced a multimodal deception dataset. The main novelty of this dataset is that it includes not only deception features but also individual differences in personality and emotional expression characteristics.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The topic of deception detection is a challenging and important one.\", \"weaknesses\": \"My main concerns with this paper are about the novelty, the content and the presentation style. Regarding the content, the contribution of this paper seems quite limited for a top conference on machine learning (alternative and potentially more adequate conferences could be ICMI. CSCW, ACM-MM, etc.). Regarding the presentation, there are several typos and some sections of methods (Section 4.2 for example) are very badly organized and written in a not adequate manner.\", \"questions\": \"The 24 questions used by the interviewer with a given subject were formulated specifically for this study?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a new multimodal deception detection dataset along with personality and emotional characteristics. The dataset is comprehensive and can be useful to the research community, if open sourced.\\n\\nHowever, the paper suffers from major methodological and conceptual issues, as pointed out by the reviewers. 
Unfortunately, the authors have not addressed these issues adequately and neither have they uploaded a revised version of the manuscript.\", \"additional_comments_on_reviewer_discussion\": \"The paper suffers from major methodological and conceptual issues, as pointed out by the reviewers. Unfortunately, the authors have not addressed these issues adequately and neither have they uploaded a revised version of the manuscript.\"}", "{\"comment\": \"Thank you very much for your guidance and valuable suggestions. We are very grateful for your comments, which are very helpful for us to sort out the structure and motivation of the article.\\n\\nIn this study, we introduce a comprehensive multimodal deception detection dataset that encompasses visual, audio, and textual data, along with personality and emotional characteristics. This dataset is the largest of its kind and offers a unique opportunity to explore how personality and emotional traits can augment deception detection. Our work not only lays the groundwork for future research in affective computing but also suggests potential avenues for improving emotion recognition through the integration of personality insights and enhancing personality recognition performance with emotional cues. We have conducted extensive experiments that preliminarily demonstrate the value of incorporating personality and emotional traits in deception detection tasks. While we acknowledge that our application of machine learning methods may not be as advanced as desired, we believe that deception detection is inherently challenging, and the performance of various machine learning approaches has been modest. This indicates a need for further in-depth feature exploration within this dataset by the research community. 
Consequently, we have chosen to present a straightforward machine learning approach as a baseline for our analysis.\\n\\nWe apologize for the typos and organizational issues in the previous version of the manuscript. We have thoroughly proofread the paper and corrected all identified typos. Additionally, we have restructured Section 4.2 to improve clarity and readability. We believe these changes will make our methods more accessible and better understood by the readers.\\n\\nThe 24 questions used by the interviewer for specific topics were specifically designed for this study (line 215); five psychology researchers mainly referred to the Fraud Triangle Theory and Rational Choice Theory. The question design integrated knowledge from psychology, criminology, and sociology to comprehensively capture multiple aspects of fraudulent behavior. We used the Delphi method to collect and integrate expert opinions, and refined the issues through focus group discussions. The design of some issues reflects consideration of the new trends in current fraudulent behavior, aiming to capture aspects that traditional methods may overlook. In fact, after designing the initial interview questions, we conducted pre-experiments, collected feedback from interviewees, and revised the interview questions after further discussion; the final interview questions were actually the third version.\\n\\nWe appreciate your suggestion of alternative conferences such as ICMI, CSCW, and ACM-MM. While we believe our work is still relevant for this conference, we will consider your suggestion for future submissions if our work is not accepted here.\\nWe hope that these revisions address your concerns and strengthen the contribution of our paper. We are grateful for the opportunity to improve our work based on your valuable feedback. 
Thank you again for your comments and acceptance of our views.\"}", "{\"summary\": \"The paper presents a new dataset for deception detection and emotion recognition.\\nThe authors evaluate several approaches/features on the task of deception detection, also including emotion information.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The new dataset can be valuable to the community if made publicly available.\\nThe authors present extensive evaluations of existing methods/features on the dataset.\", \"weaknesses\": \"The concept of emotion and emotion expression used in the paper is fuzzy.\\nE.g. what is a \\\"true emotional expression\\\" (line 074) supposed to refer to? One possibility that I might suspect is that the authors refer to whether the emotional expression is aligned with an internal state. This is a highly complex topic as emotions are not displayed directly on e.g. a person's face but are subject to regulation processes and social display rules (e.g. see Schneeberger et al., 2023; M\\u00fcller et al., 2024).\\n\\n\\nThe method novelty is limited, as the authors only evaluate simple combinations of existing approaches. This is not a very significant issue, as the paper presents itself as a dataset paper, where method novelty is not necessary imo. However, in my understanding it would then be important for an ICLR paper to have a more in-depth analysis of what are the implications of the new dataset for deep learning. We should be able to learn something with it. 
E.g., what are the remaining challenges, does it tell us something about what the current architectures cannot cover?\\nOn what kinds of examples do the current methods fail?\\n\\n\\nReferences are missing for the statement \\\"Most studies on deception detection are designed and evaluated on private datasets, typically with relatively small sample sizes\\\" (line 104).\\nIt would also be helpful to add information on the public availability of the existing datasets in Table 1.\\n\\nIn the description of the emotional experiment (lines 198-207), authors mention that they extended some existing dataset of emotional videos because it only has 6 emotions. In the end there seem to be 8. Which emotions were added and why were they considered to be relevant in the context of deception detection?\\n\\nConcerning the results - the gains by including personality or emotion into the models are only marginal. Are these gain statistically significant when accounting for randomness (of the ssample/datasets, and of the methods/random number generator intialization)?\\n\\nResearch on deception detection raises a host of ethical issues which are not discussed in the paper.\\n\\nAll in all, the strength of the contribution is not too convincing to me and there are a number of open questions on the motivation/scientific background. Furthermore, the quality of the writing is not good enough at the moment, and the improvements for personality/emotion integration are only marginal.\", \"misc\": [\"-----\", \"line 015: \\\"role.Although\\\"\", \"line 093: \\\"introduced a new multi-modal deception dataset\\\" - the characterisation \\\"new\\\" does not make sense here, as the dataset is already quite old (2015) and of course everything was new when it was introduced.\", \"line 102: wrongly formatted reference (\\\"Speth Jeremy et al.Speth et al. (2021)\\\"), this occurs several times, e.g. 
in 126,129 and at many other places.\", \"line 110: \\\"Subjeet\\\", missing space in \\\"Length(Minutes)\\\"\", \"line 133: non-grammatical paragraph heading \\\"Individual Difference Deception\\\"\", \"there are more formal/languageissues throughout the paper. the ones above are just some examples.\", \"line 376: \\\"egemas\\\"\"], \"references\": \"-----------\\n\\nSchneeberger, Tanja, et al. \\\"The deep method: Towards computational modeling of the social emotion shame driven by theory, introspection, and social signals.\\\" IEEE Transactions on Affective Computing (2023).\\n\\nM\\u00fcller, Philipp, et al. \\\"Recognizing Emotion Regulation Strategies from Human Behavior with Large Language Models.\\\" arXiv preprint arXiv:2408.04420 (2024).\", \"questions\": \"What was the reasoning behind using a movie-based emotion induction procedure in the emotion experiment? Why is this procedure relevant in relation to the deception experiment where emotions are induced in a social situation?\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"Deception detection might be used to make people more transparent despite their will.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EpnZEzYDUT
Efficient Multi-agent Offline Coordination via Diffusion-based Trajectory Stitching
[ "Lei Yuan", "Yuqi Bian", "Lihe Li", "Ziqian Zhang", "Cong Guan", "Yang Yu" ]
Learning from offline data without interacting with the environment is a promising way to fully leverage the intelligent decision-making capabilities of multi-agent reinforcement learning (MARL). Previous approaches have primarily focused on developing learning techniques, such as conservative methods tailored to MARL using limited offline data. However, these methods often overlook the temporal relationships across different timesteps and spatial relationships between teammates, resulting in low learning efficiency in imbalanced data scenarios. To comprehensively explore the data structure of MARL and enhance learning efficiency, we propose Multi-Agent offline coordination via Diffusion-based Trajectory Stitching (MADiTS), a novel diffusion-based data augmentation pipeline that systematically generates trajectories by stitching high-quality coordination segments together. MADiTS first generates trajectory segments using a trained diffusion model, followed by applying a bidirectional dynamics constraint to ensure that the trajectories align with environmental dynamics. Additionally, we develop an offline credit assignment technique to identify and optimize the behavior of underperforming agents in the generated segments. This iterative procedure continues until a satisfactory augmented episode trajectory is generated within the predefined limit or is discarded otherwise. Empirical results on imbalanced datasets of multiple benchmarks demonstrate that MADiTS significantly improves MARL performance.
[ "Multi-agent Reinforcement Learning", "Offline MARL", "Diffusion based Reinforcement Learning", "Trajectory Stitching" ]
Accept (Poster)
https://openreview.net/pdf?id=EpnZEzYDUT
https://openreview.net/forum?id=EpnZEzYDUT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yFtaXkMHx8", "xGn6N5NVQ8", "veWwDS25Qj", "vbXTZgU0ET", "t7RsU7bsEB", "rYXcHmpDJ7", "md1T1qiarz", "jiYtNGN1nw", "hTOafA2k7A", "gyo5plZBmL", "czQJC94o76", "Zhu043x2En", "YYeYHL7GYh", "Wm8WL14BY1", "WIJ9PuvLpx", "W6PteyPNhS", "RGWJC8osRp", "PWymkaYS9V", "LmNHEXYJsI", "LDmk19y8Uv", "ITIb84ZMg3", "HLSYKc2xEm", "DrJrsna5rB", "7D7HSyIjOY", "6dkdcPUJ72", "6bOE5iGDVp", "63FIBMEuWI", "5Gcy2QrDrR", "2bsu6I6dFI", "2QeogBgdUA", "22iMU1mFJp", "0OG0E3WaB0" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732092384911, 1737524091471, 1732608152661, 1732093801104, 1732093278013, 1732093169536, 1732093194395, 1732093018587, 1732537308180, 1730300792224, 1732092867469, 1732093469224, 1732547889192, 1732620206339, 1732092897088, 1730703103114, 1732697195762, 1732092626686, 1730398568799, 1732410734386, 1732552308956, 1732410670491, 1732699427316, 1732699471589, 1732257878747, 1732092733542, 1732602364189, 1732092804974, 1734766917192, 1732410570429, 1730638258946, 1732093102672 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" 
], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_bitr" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_bitr" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_bitr" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_Af9P" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_Af9P" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_qWY9" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_Af9P" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_qWY9" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Area_Chair_HkWC" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ], [ "ICLR.cc/2025/Conference/Submission10913/Reviewer_onPK" ], [ "ICLR.cc/2025/Conference/Submission10913/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": \"We really appreciate your efforts in helping us reflect and improve the paper. In light of the reviewers' comments, we made the following major revision based on the previous manuscript and now submitted as the rebuttal version. The revised parts are highlighted in blue.\\n\\n1. A new baseline MADiff is included for all environments in **Section 5.2** and **Appendix H.1**. **(for reviewer Af9P and bitr)**\\n2. 
Results on more challenging MARL tasks like MAMuJoCo and SMACv2 are included in **Appendix H.1**. **(for reviewer Af9P and qWY9)**\\n3. Results on datasets with a larger number of agents are included in **Appendix H.1**. **(for reviewer qWY9 and bitr)**\\n4. Detailed discussion on key differences with other related methods on offline MARL are discussed in **Appendix B**. **(for reviewer Af9P and bitr)**\\n5. More related works are included in the revised manuscript in **Section 2**. **(for reviewer bitr)**\\n6. Computational cost of our method and baselines are discussed in **Appendix E**. **(for reviewer onPK and bitr)**\\n7. Vague description of \\u201ccontinues until we obtain a satisfactory augmented episode trajectory\\u201d is revised in **Abstract** and **Introduction**. **(for reviewer bitr)**\\n8. Unclear explanation of how we address the imbalance of the dataset in terms of time and space is revised in **Introduction**. **(for reviewer onPK)**\\n9. The meaning of \\u201cstitching\\u201d is further explained in Introduction and **Section 4.1**. **(for reviewer qWY9)**\\n10. Quantitative Analysis of the accuracy and effectiveness of Integrate Gradient is included in **Appendix H.5**. **(for reviewer qWY9)**\\n11. Further Discussion of the limitations and applicability is included in **Conclusion**. **(for reviewer qWY9)**\\n12. Illustration of impact of MADiTS on the diversity of the trajectories is included in **Appendix H.6**. **(for reviewer bitr)**\\n\\nThank you all again for your time and valuable advice! We will address each reviewer's individual concerns momentarily. Feel free to let us know if you have any more comments or questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for your timely response!\", \"comment\": \"We highly appreciate the reviewer\\u2019s insightful feedback, which clearly helped us improve our paper a lot. 
Regarding the additional experiments on more challenging MARL tasks (MAMuJoCo and SMACv2), due to the 10-page limit for the main paper, we have placed the results in\\u00a0**Table 4 of Appendix H.1**\\u00a0in the revised paper. The analysis of these additional experimental results is also highlighted in blue for clarity. We have also **uploaded a new version of manuscript** and added further **discussion on the application to cooperative-competitive settings and concerns on the use of diffusion models** in **Appendix I**, which is also highlighted for clarity.\\n\\nWe sincerely appreciate the reviewer find our response useful. If you have any further questions, we would be delighted to discuss them in more detail!\"}", "{\"title\": \"Response to Reviewer bitr (Part III)\", \"comment\": \"**Q5: Comparison with MADiff, DOM2, and SIT.**\", \"a5\": \"(continued)\\n- SIT[1] is an offline MARL method that focuses on learning an effective joint policy from agent-wise imbalanced datasets, primarily addressing spatial imbalance. It achieves this through an attention-based reward decomposition network for offline credit assignment, which identifies high-quality individual trajectories for sharing among agents, and a graph attention network for conservative policy training. In contrast, our method, MADiTS, prioritizes improving sample efficiency through a data-oriented augmentation pipeline. By employing a diffusion model-based trajectory stitching mechanism, MADiTS enhances dataset quality to address both temporal and spatial imbalances. Unlike SIT, which directly learns policies from imbalanced datasets, MADiTS generates augmented datasets that can be flexibly used by any offline MARL algorithm, providing enhanced flexibility and modularity.\\n- DOM2[4] is a diffusion model-based offline MARL algorithm that aims to enhance policy expressiveness and diversity. 
It achieves notable improvements in performance, generalization, and data efficiency through an accelerated solver for diffusion-based policy construction and a policy regularizer, while also scaling up dataset size to boost policy learning. DOM2 excels on balanced datasets, outperforming traditional conservatism-based offline MARL methods. In contrast, MADiTS targets the challenges posed by imbalanced datasets. By addressing temporal and spatial imbalances, MADiTS improves dataset quality, enabling other offline MARL algorithms to achieve better performance and demonstrating its effectiveness in handling imbalanced scenarios.\\n\\nThanks for your constructive suggestions. We have included MADiff[3] in the main experiments as a baseline. And we also discuss the key differences with these related mentioned methods in detail in Appendix B and more related works in the revised manuscript.\\n\\n**References:**\\n\\n[1] Tian, Q., et al. Learning from good trajectories in offline multi-agent reinforcement learning. In *Association for the Advancement of Artificial Intelligence*, pp. 11672\\u201311680, 2023.\\n\\n[2] Zheng, L., et al. Magent: A many-agent reinforcement learning platform for artificial collective intelligence. In *Association for the Advancement of Artificial Intelligence*. 2018, 32(1).\\n\\n[3] Zhu, Z., et al. MADiff: Offline multi-agent learning with diffusion models. *arXiv preprint arXiv:2305.17330*, 2023.\\n\\n[4] Li, Z., et al. Beyond conservatism: Diffusion policies in offline multi-agent reinforcement learning. *arXiv preprint arXiv:2307.01472*, 2023.\"}", "{\"title\": \"Response to Reviewer bitr (Part I)\", \"comment\": \"Thank you very much for carefully reviewing our paper and providing constructive comments and suggestions. 
In response, we have added experiments and comparisons with different methods, which are presented as follows:\n\n**Q1: Reasons for generating the imbalanced dataset using random perturbations on expert trajectories.**\", \"a1\": \"Our primary goal in generating imbalanced datasets is to create scenarios more challenging than those in mixed-quality datasets by explicitly introducing temporal and spatial imbalances. This approach pushes the boundaries of data augmentation techniques in Offline MARL. To achieve this, we apply random perturbations to expert trajectories, simulating both types of imbalances. This method constructs datasets with more pronounced imbalances compared to simple mixing strategies, allowing for a more rigorous evaluation of our method under challenging task settings.\n\nBuilding on a general framework for addressing data imbalance in data augmentation [1], our approach extends beyond spatial imbalance to also tackle temporal imbalance. This expansion broadens the scope of our method and introduces additional challenges, enabling exploration of more diverse and realistic multi-agent settings. These innovations highlight the unique aspects of our approach. \n\nRegarding the performance of our method on the mixed-quality datasets mentioned by the reviewer, as shown in the simplified table below, MADiTS still significantly enhances dataset quality in mixed-quality md-replay datasets (with lines bolded). In tested environments, its performance approaches or even matches that of medium-quality datasets, demonstrating its ability to effectively leverage limited high-quality data to improve suboptimal data segments.
For more detailed results, please refer to Table 5 in the revised manuscript.\\n\\n| Env | Dataset | Original | MADiTS |\\n| --- | --- | --- | --- |\\n| CN | expert | 100.42 \\u00b1 5.67 | 127.91 \\u00b1 5.63 |\\n| | medium | 80.46 \\u00b1 5.05 | 91.31 \\u00b1 2.24 |\\n| | **md-replay** | **32.17 \\u00b1 4.51** | **84.47 \\u00b1 8.04** |\\n| | random | 2.69 \\u00b1 4.24 | 22.38 \\u00b1 4.59 |\\n| PP | expert | 77.54 \\u00b1 5.73 | 97.04 \\u00b1 11.81 |\\n| | medium | 54.14 \\u00b1 10.19 | 63.25 \\u00b1 10.15 |\\n| | **md-replay** | **-6.43 \\u00b1 3.21** | **25.04 \\u00b1 11.45** |\\n| | random | -8.94 \\u00b1 1.55 | -7.38 \\u00b1 2.49 |\\n| World | expert | 76.89 \\u00b1 23.08 | 138.10 \\u00b1 34.01 |\\n| | medium | 54.99 \\u00b1 14.27 | 96.27 \\u00b1 11.78 |\\n| | **md-replay** | **19.28 \\u00b1 4.06** | **29.46 \\u00b1 4.65** |\\n| | random | 6.19 \\u00b1 3.08 | 14.01 \\u00b1 2.95 |\\n\\n**Q2: The impact of MADiTS on the diversity of the trajectories.**\", \"a2\": \"The dynamics constraints and behavior correction mechanism not only preserve the diversity of trajectories but also significantly improve data quality. To illustrate this, we provide visualizations of the synthesized trajectories under a fixed, non-stochastic environment's initial state, defined by the agents' starting positions and landmark locations. Comparisons between MADiTS and its variant without dynamics constraints or behavior correction reveal that our approach generates data with broader coverage and higher diversity. Further details and analysis can be found in Figure 7 and Appendix H.6.\\n\\n**Q3: Clarity regarding the computational cost of MADiTS and vague descriptors.**\", \"a3\": \"Our method achieves computational complexity comparable to other baseline approaches. To demonstrate this, we provide a summary of the computational costs for MADiTS and baseline methods on the Cooperative Navigation task. The experiments were conducted on servers equipped with GeForce RTX 2080 Ti GPUs. 
As shown in the table below, MADiTS maintains computational complexity similar to other methods, staying within an acceptable range while ensuring competitive performance.\\n\\n| **Method** | **Model Training (GPU Hours)** | **Trajectory Processing (GPU Hours)** | **Total (GPU Hours)** |\\n| --- | --- | --- | --- |\\n| **MADiTS** | 36 | 4 | 40 |\\n| **MADiff** | 36 | - | 36 |\\n| **MA-MBTS** | - | 48 | 48 |\\n\\nAs for the vague descriptors, we have changed the sentence from \\u201cThis iterative procedure continues until we obtain a satisfactory augmented episode trajectory\\u201d to \\u201cThis iterative procedure continues until a satisfactory augmented episode trajectory is generated **within the predefined limit or is discarded otherwise**\\u201d. We greatly appreciate your suggestion.\"}", "{\"title\": \"Response to Reviewer qWY9 (Part II)\", \"comment\": \"**Q4: Analysis on effects of integrated gradient to identify underperforming agents.**\", \"a4\": \"Integrated Gradients (IG)[8] is a widely used neural network interpretability method that uses path integrals of gradients along the path between a baseline input and the actual input to determine how each feature influences the model\\u2019s output. This approach tracks the change in predictions from $F(b)$ to $F(x)$ as the input moves from the baseline $b$ to the actual input $x$. We utilize IG\\u2019s attribution capabilities to identify underperforming agents in joint trajectory segments. Specifically, in MADiTS, we employ PathIG to estimate each agent\\u2019s contribution at every timestep using a trainable reward function. These contributions are aggregated to identify underperforming agents. For instance, in a 3-agent environment (agents 0, 1, and 2), if agent 2 consistently underperforms and provides minimal contributions to the team\\u2019s overall performance, PathIG assigns low contribution scores to agent 2 across multiple timesteps. 
Consequently, the aggregated ranking for agent 2 will be lower than those for agents 0 and 1, clearly identifying agent 2 as underperforming.\\n\\nTo quantitatively evaluate the accuracy of IG-based agent rankings, we analyzed the **Kendall correlation coefficient**[9] between the estimated contribution values from IG and the ground-truth contributions in the Cooperative Navigation task. In this task, the proximity of agents to landmarks determines the overall return. Using the true reward function of the environment (available only for analysis), we calculated each agent\\u2019s actual contribution at every timestep as the ground truth. The Kendall correlation coefficient was then computed to assess the alignment between IG-derived rankings and ground-truth rankings for each trajectory.\\n\\nAs shown in the distribution of Kendall correlation coefficients, most values are **positive**, indicating that IG aligns well with the ground-truth contributions. This demonstrates that our PathIG-based offline credit assignment method achieves a **high degree of accuracy**. Additional details on the results and methodology are provided in Figure 6 and Appendix H.5. We deeply appreciate your feedback, which has helped enhance the clarity and presentation of our method.\\n\\n**Q5: The extent to which regeneration can improve the overall reward.**\", \"a\": \"Trajectory regeneration can enhance overall rewards in MARL but is inherently influenced by the quality of the collected training data, a common challenge faced by popular generative models [10]. Our method leverages the robust data distribution modeling capabilities of diffusion models to sample and generate high-quality cooperative trajectory segments for data augmentation, even when these segments are scarce in the original dataset. By addressing dataset imbalances, MADiTS demonstrates significant improvements, as shown in our experiments. 
Specifically, regeneration reduces the average number of underperforming agents from 0.38 to 0.09 in the CN exp-m dataset, underscoring the efficiency of our approach.\\n\\nRegarding the extreme scenario raised by the reviewer, where the dataset entirely lacks high-quality cooperative segments, generating such Out-of-Distribution (OOD) segments remains a notable challenge in data generation [11]. One potential direction to overcome this issue is integrating external knowledge, such as leveraging Large Language Models (LLMs) in multi-agent RL [12]. We deeply appreciate the reviewer\\u2019s insightful question and plan to explore this promising avenue in future work.\"}", "{\"title\": \"Response to Reviewer qWY9 (Part III)\", \"comment\": \"**References:**\\n\\n[1] Zheng, L., et al. Magent: A many-agent reinforcement learning platform for artificial collective intelligence. In *Proceedings of the AAAI conference on artificial intelligence*. 2018, 32(1).\\n\\n[2] Kim, S., et al. Stitching Sub-trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL. In *Proceedings of the AAAI Conference on Artificial Intelligence*. 2024, 38(12): 13160-13167.\\n\\n[3] Li, G., et al. Diffstitch: Boosting offline reinforcement learning with diffusion-based trajectory stitching. In *International Conference on Machine Learning*, pp. 28597\\u201328609, 2024.\\n\\n[4] Hepburn, C. A., et al. Model-based trajectory stitching for improved offline reinforcement learning. *arXiv preprint arXiv:2211.11603*, 2022.\\n\\n[5] Yamagata, T., et al. Q-learning decision transformer: Leveraging dynamic programming for conditional sequence modelling in offline RL. In *International Conference on Machine Learning*. 2023: 38989-39007.\\n\\n[6] Wu, Y. H., et al. Elastic decision transformer. In *Advances in Neural Information Processing Systems*, 2023.\\n\\n[7] Badrinath, A., et al. Waypoint transformer: Reinforcement learning via supervised learning with intermediate targets. 
In *Advances in Neural Information Processing Systems*, 2023.\n\n[8] Sundararajan, M., et al. Axiomatic attribution for deep networks. In *International Conference on Machine Learning*, pp. 3319\u20133328, 2017.\n\n[9] Kendall, M. G., et al. The treatment of ties in ranking problems. *Biometrika*, 33(3):239\u2013251, 1945.\n\n[10] Cao, H., et al. A survey on generative diffusion models. In *IEEE Transactions on Knowledge and Data Engineering*, 2024.\n\n[11] Zhu, Y., et al. Unseen Image Synthesis with Diffusion Models. *arXiv preprint arXiv:2310.09213*, 2023.\n\n[12] Sun, C., et al. LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions. *arXiv preprint arXiv:2405.11106*, 2024.\"}", "{\"title\": \"Response to Reviewer onPK\", \"comment\": \"Thank you for your inspiring and thoughtful reviews. We have prepared the following experimental results and comments for the weaknesses and questions you raised, and we hope they address your concerns:\n\n**Q1: Experimental analysis on methods with different hyperparameters and how they are determined.**\", \"a1\": \"We use grid search to select these hyperparameters, a popular approach for hyperparameter selection [1]. Take $\\delta_{recon}$ as an example, which determines the strictness of the dynamics constraint. We first specify a list of candidate values, such as [0.003, 0.005, 0.01, 0.02, 0.05], and then evaluate the filtering strictness on a small-scale dataset. If $\\delta_{recon}$ is too small, stitching a complete trajectory will take a long time. Conversely, if $\\delta_{recon}$ is too large, no generated trajectories will be discarded. Details of experimental results for other key hyperparameters are included in Appendix H.3 of the revised paper.\n\n**Q2: Regarding computational complexity.**\", \"a2\": \"Our method achieves computational complexity comparable to other baseline approaches.
To demonstrate this, we provide a summary of the computational costs for MADiTS and baseline methods on the Cooperative Navigation task. The experiments were conducted on servers equipped with GeForce RTX 2080 Ti GPUs. As shown in the table below, MADiTS maintains computational complexity similar to other methods, staying within an acceptable range while ensuring competitive performance.\\n\\n| **Method** | **Model Training (GPU Hours)** | **Trajectory Processing (GPU Hours)** | **Total (GPU Hours)** |\\n| --- | --- | --- | --- |\\n| **MADiTS** | 36 | 4 | 40 |\\n| **MADiff** | 36 | - | 36 |\\n| **MA-MBTS** | - | 48 | 48 |\\n\\n**Q3: How does the proposed method address the imbalance of the dataset in terms of time and space?**\", \"a3\": \"We leverage a diffusion model to seamlessly stitch trajectory segments across different timesteps, effectively addressing **temporal imbalances**. To tackle **spatial imbalances**, we employ Integrated Gradients to identify these imbalances and optimize them by regenerating data through the diffusion model. We apologize for the earlier unclear explanation and have revised the introduction for clarity in the updated manuscript.\\n\\n**Reference:**\\n\\n[1] Liashchynskyi P., et al. Grid search, random search, genetic algorithm: a big comparison for NAS. *arXiv preprint arXiv:1912.06059*, 2019.\"}", "{\"title\": \"Follow Up\", \"comment\": \"Thanks to the authors for their detailed response and for addressing some of my initial questions. I have two follow-up queries for further clarification:\\n\\n1. What was the computational cost of the 12m experiments across the different methods (MADiTS, MADiff, MA-MBTS)? This would help better understand how the approach scales computationally with respect to the number of agents.\\n2. Do you have quantifiable measures of agent diversity? 
While examining trajectories is informative, it may not adequately capture or quantify diverse behaviours in a rigorous way.\"}", "{\"summary\": \"The authors propose a method called MADiTS to enhance the quality of imbalanced offline datasets in Multi-Agent Reinforcement Learning (MARL). By applying diffusion-based data augmentation, MADiTS generates high-return trajectories while enforcing dynamics constraints to ensure they align with the environment dynamics. Additionally, a behavior correction mechanism correct the actions of suboptimal agents, improving their trajectories within shared reward contexts. Experiments on MPE and SMAC datasets show that MADiTS can improve the performance of offline algorithms like Behavior Cloning (BC) and other offline algorithms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors address an important issue of dataset quality in MARL, particularly relevant for offline applications where data imbalance often impacts policy performance in real-world scenarios.\", \"The paper is well-written, clear and easy to follow.\", \"The authors apply their method to datasets collected on SMAC and MPE, two popular MARL environments, and apply it to behavior cloning (BC) and two modern offline MARL algorithms - OMIGA and CFCQL.\", \"The authors provide an ablation study to show the impact of different components of their MADiTS method.\"], \"weaknesses\": [\"Methodology:\", \"The authors provide details on how they generate the imbalanced dataset, using random perturbations on expert trajectories. However, it is not clear why this is better than using a dataset collected by mixed policies, e.g. taking medium and expert policies and combining their trajectories in a dataset. This appears to be more realistic and relevant to the problem of imbalanced datasets in MARL. 
A discussion on this choice would be relevant.\", \"Work such as [2,3] highlights the importance of diverse trajectories in offline MARL. This impact of MADiTS on the diversity of the trajectories is not discussed in the paper.\", \"Clarity is needed regarding the computational cost of MADiTS. Details such as runtime and comparison with other trajectory augmentation methods (e.g., MA-MBTS or baseline models) would be relevant, especially since the text includes vague descriptors like \\u201cuntil a satisfactory trajectory is obtained.\\u201d Further, insights into how MADiTS scales with agent numbers or in larger MARL environments would enhance applicability.\"], \"comparisons_with_related_diffusion_models\": \"- In the appendix, the authors mention they use MaDiff [1] architecture for the diffusion model. However, the original MaDiff model is not included as a baseline in the main experiments, nor is Diffusion Offline Multi-agent Model (DOM2) [3], another relevant diffusion-based MARL trajectory augmentation approach. Furthermore, Shared Individual Trajectories (SIT) [2], another work that aims at helping with imbalanced datasets in offline MARL is only briefly mentioned in the appendix, and not mentioned in the main text. A comparison with MaDiff (or clarification if \\u201cw/o dc + pn\\u201d in Figure 4a is equivalent to MaDiff) would strengthen the evaluation. Including DOM2 and SIT as baselines or discussing the differences would clarify MADiTS's unique contributions to diffusion-based MARL augmentation. \\n\\n1] Zhu Z, Liu M, Mao L, Kang B, Xu M, Yu Y, Ermon S, Zhang W. Madiff: Offline multi-agent learning with diffusion models. arXiv preprint arXiv:2305.17330. 2023 May 27.\\n\\n2] Tian Q, Kuang K, Liu F, Wang B. Learning from good trajectories in offline multi-agent reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence 2023 Jun 26 (Vol. 37, No. 10, pp. 11672-11680).\\n\\n3] Li Z, Pan L, Huang L. 
Beyond conservatism: Diffusion policies in offline multi-agent reinforcement learning. arXiv preprint arXiv:2307.01472. 2023 Jul 4.\", \"questions\": \"1. Why does this approach generate an imbalanced dataset by perturbing expert trajectories? Why is this better than using a dataset collected by mixed policies and combine their trajectories in a dataset?\\n2. How does the proposed method impact the diversity of the trajectories in the augmented dataset? Do the dynamics constraints or behavior correction mechanism hurt the diversity of the trajectories?\\n3. What is the computational cost of the proposed method compared to the baselines? How does the proposed method scale with the number of agents, particularly with the integrated gradient computation?\\n4. How does MADiTS compare to standalone MaDiff, DOM2 and SIT? What are the key differences, and why were these methods not used as baselines in the primary experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Af9P (Part IV)\", \"comment\": \"**Q6: Results on balanced but bad datasets.**\", \"a6\": \"Our method shows significant improvements even on balanced but low-quality datasets. Specifically, we use the multi-agent offline dataset in MPE with continuous action spaces, as constructed in OMAR[8] for data augmentation. This dataset includes trajectories generated by fully trained expert policies, medium-performing policies, and random policies in an online environment. 
As demonstrated in the table below, MADiTS consistently enhances learning efficiency across expert, medium, md-replay, and random data settings, showcasing its versatility and robustness across diverse scenarios.\\n\\n| Env | Dataset | Original | MADiTS |\\n| --- | --- | --- | --- |\\n| CN | expert | 100.42 \\u00b1 5.67 | 127.91 \\u00b1 5.63 |\\n| | medium | 80.46 \\u00b1 5.05 | 91.31 \\u00b1 2.24 |\\n| | md-replay | 32.17 \\u00b1 4.51 | 84.47 \\u00b1 8.04 |\\n| | random | 2.69 \\u00b1 4.24 | 22.38 \\u00b1 4.59 |\\n| PP | expert | 77.54 \\u00b1 5.73 | 97.04 \\u00b1 11.81 |\\n| | medium | 54.14 \\u00b1 10.19 | 63.25 \\u00b1 10.15 |\\n| | md-replay | -6.43 \\u00b1 3.21 | 25.04 \\u00b1 11.45 |\\n| | random | -8.94 \\u00b1 1.55 | -7.38 \\u00b1 2.49 |\\n| World | expert | 76.89 \\u00b1 23.08 | 138.10 \\u00b1 34.01 |\\n| | medium | 54.99 \\u00b1 14.27 | 96.27 \\u00b1 11.78 |\\n| | md-replay | 19.28 \\u00b1 4.06 | 29.46 \\u00b1 4.65 |\\n| | random | 6.19 \\u00b1 3.08 | 14.01 \\u00b1 2.95 |\\n\\n**Q7: Ethical concerns on the use of diffusion models.**\", \"a7\": \"Biases in the original offline dataset can propagate into the diffusion model, potentially affecting the quality of the generated trajectories. To mitigate this, our method employs a bidirectional dynamics constraint, ensuring that the generated trajectories remain consistent with the environmental dynamics. Additionally, we integrate an offline credit assignment technique to identify and optimize the performance of underperforming agents within the generated trajectory segments, further enhancing the overall quality and utility of the augmented data.\\n\\nOn one hand, generative models like ChatGPT[9] and SORA[10] rely on large-scale datasets to train architectures such as Transformers or diffusion-based models. These models exhibit exceptional generative capabilities across domains like language and video, aligning with scaling laws that link performance to data size. 
Recognizing the importance of data, these methods often use autoregressive training or advanced techniques to optimize data fitting. For real-world applications such as autonomous driving[11], healthcare[12], and finance[13], diffusion models require extensive and diverse datasets to ensure robust performance. Recent advancements have highlighted the potential of diffusion models to transform these domains.\\n\\nOn the other hand, despite their capabilities, these methods face challenges such as unreliable or unrepresentative data. To ensure reliability in real-world applications, techniques like human-in-the-loop testing[14] and risk control mechanisms[15] are crucial.\\n\\nTo address the issue of synthetic data deviating from real-world distributions, our method MADiTS introduces a bidirectional dynamics constraint to align generated trajectories with environmental dynamics. Moreover, the offline credit assignment technique enhances robustness by identifying and improving underperforming agents in generated segments. Experimental results validate the effectiveness of MADiTS in overcoming these challenges.\\n\\nWe appreciate your insightful concerns regarding the use of diffusion models with offline datasets and look forward to continued discussions on advancing this field.\"}", "{\"title\": \"Response to Reviewer bitr (Part II)\", \"comment\": \"**Q4: How MADiTS performs in larger MARL environments.**\", \"a1\": \"MADiTS enhances sample efficiency across various methods, even as the number of agents increases. To validate its effectiveness, we conducted experiments on the 12m map from SMAC, with 12 marine agents and 12 marine enemies. As shown in the table below, MADiTS outperforms the baseline method MA-MBTS and shows competitive results compared to MADiff, demonstrating its broad applicability. 
As for large-scale MARL settings with hundreds or thousands of agents[2], addressing the scalability issue through techniques such as grouping would be of great interest, and we leave this for future work.\\n\\n| Envs | Algs | Balanced | Original | | MA-MBTS | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| 12m | BC | 0.95 \\u00b1 0.03 | 0.83 \\u00b1 0.11 | 0.63 \\u00b1 0.28 | 0.81 \\u00b1 0.11 | 0.56 \\u00b1 0.22 | 0.85 \\u00b1 0.12 | **0.65 \\u00b1 0.21** | **0.88 \\u00b1 0.07** | **0.65 \\u00b1 0.12** |\\n| | OMIGA | 0.97 \\u00b1 0.03 | 0.94 \\u00b1 0.02 | 0.85 \\u00b1 0.02 | 0.95 \\u00b1 0.03 | 0.85 \\u00b1 0.03 | **0.96 \\u00b1 0.03** | 0.86 \\u00b1 0.04 | 0.95 \\u00b1 0.02 | **0.87 \\u00b1 0.02** |\\n| | CFCQL | 0.90 \\u00b1 0.04 | 0.78 \\u00b1 0.09 | 0.50 \\u00b1 0.11 | 0.70 \\u00b1 0.11 | 0.46 \\u00b1 0.10 | 0.79 \\u00b1 0.07 | 0.53 \\u00b1 0.09 | **0.80 \\u00b1 0.05** | **0.63 \\u00b1 0.08** |\\n| Average | | 0.94 | 0.85 | 0.66 | 0.82 | 0.62 | 0.86 | 0.68 | **0.87** | **0.71** |\\n\\n**Q5: Comparison with MADiff, DOM2, and SIT.**\", \"a5\": \"Our method focuses on addressing sample efficiency issues by leveraging diffusion models to perform trajectory stitching for data augmentation in temporally and spatially imbalanced datasets, allowing offline MARL algorithms to achieve better performance by learning from the enhanced dataset. It differs from the mentioned works as follows:\\n\\n- MADiff[3] is a diffusion model-based offline MARL algorithm designed to predict future joint actions for decision-making by modeling teammate behaviors. It uses a diffusion model to capture joint observation sequences and infer actions for planning, achieving strong performance on balanced datasets and excelling in standard offline MARL settings. 
In contrast, MADiTS specifically targets the challenges of data imbalance in offline MARL, focusing on improving sample efficiency through innovative data augmentation techniques. While MADiff enhances learning efficiency in balanced scenarios, MADiTS addresses the unique difficulties posed by imbalanced datasets. To highlight these distinctions, we compare the performance of MADiff, extended for data augmentation, with MADiTS across various environments. The results demonstrate MADiTS\\u2019s superior effectiveness in tackling data imbalance challenges.\\n\\n| Envs | Algs | Original | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| CN | BC | 17.27 \\u00b1 3.66 | 9.03 \\u00b1 3.40 | 40.44 \\u00b1 3.75 | 30.24 \\u00b1 4.82 | **43.44 \\u00b1 6.11** | **37.82 \\u00b1 4.45** |\\n| | OMIGA | -2.27 \\u00b1 57.07 | -11.60 \\u00b1 58.90 | 19.26 \\u00b1 67.68 | 7.27 \\u00b1 66.89 | **23.02 \\u00b1 69.69** | **22.91 \\u00b1 44.94** |\\n| | CFCQL | -33.40 \\u00b1 44.28 | -56.44 \\u00b1 40.38 | 30.08 \\u00b1 12.97 | 21.88 \\u00b1 10.71 | **39.57 \\u00b1 16.14** | **28.60 \\u00b1 20.94** |\\n| PP | BC | 48.49 \\u00b1 4.69 | 49.21 \\u00b1 3.90 | 49.55 \\u00b1 6.50 | 49.93 \\u00b1 3.74 | **54.85 \\u00b1 4.23** | **55.50 \\u00b1 4.28** |\\n| | OMIGA | 37.90 \\u00b1 25.34 | 26.73 \\u00b1 40.67 | 47.48 \\u00b1 20.97 | 58.44 \\u00b1 4.76 | **63.02 \\u00b1 3.40** | **63.71 \\u00b1 5.67** |\\n| | CFCQL | 45.03 \\u00b1 4.62 | 28.88 \\u00b1 6.29 | 45.50 \\u00b1 6.76 | 30.04 \\u00b1 7.02 | **47.38 \\u00b1 3.74** | **32.25 \\u00b1 10.98** |\\n| World | BC | 47.29 \\u00b1 3.00 | 48.30 \\u00b1 5.96 | 50.79 \\u00b1 2.62 | 51.55 \\u00b1 4.96 | **54.25 \\u00b1 4.35** | **52.84 \\u00b1 5.29** |\\n| | OMIGA | 54.23 \\u00b1 6.20 | 52.92 \\u00b1 5.64 | 49.55 \\u00b1 24.99 | 48.26 \\u00b1 24.75 | **57.35 \\u00b1 5.89** | **58.59 \\u00b1 9.32** |\\n| | CFCQL | 28.59 \\u00b1 1.90 | 18.14 \\u00b1 18.16 | 28.63 
\\u00b1 10.14 | 34.35 \\u00b1 6.91 | **29.62 \\u00b1 10.91** | **39.36 \\u00b1 1.91** |\\n| Average | | 27.01 | 18.35 | 40.14 | 36.88 | **45.83** | **43.50** |\\n| 2m_vs_1z | BC | 0.03 \\u00b1 0.06 | 0.03 \\u00b1 0.06 | 0.30 \\u00b1 0.18 | 0.30 \\u00b1 0.18 | **0.35 \\u00b1 0.20** | **0.35 \\u00b1 0.20** |\\n| | OMIGA | 0.59 \\u00b1 0.28 | 0.59 \\u00b1 0.28 | 0.97 \\u00b1 0.04 | 0.97 \\u00b1 0.04 | **0.98 \\u00b1 0.02** | **0.98 \\u00b1 0.02** |\\n| | CFCQL | 0.72 \\u00b1 0.24 | 0.72 \\u00b1 0.24 | 0.92 \\u00b1 0.05 | 0.92 \\u00b1 0.05 | **0.94 \\u00b1 0.05** | **0.94 \\u00b1 0.05** |\\n| 3m | BC | 0.42 \\u00b1 0.05 | 0.38 \\u00b1 0.20 | 0.44 \\u00b1 0.18 | 0.36 \\u00b1 0.15 | **0.51 \\u00b1 0.19** | **0.43 \\u00b1 0.18** |\\n| | OMIGA | 0.93 \\u00b1 0.05 | 0.90 \\u00b1 0.03 | **1.00 \\u00b1 0.00** | 0.94 \\u00b1 0.03 | **1.00 \\u00b1 0.00** | **0.96 \\u00b1 0.03** |\\n| | CFCQL | 0.90 \\u00b1 0.06 | 0.80 \\u00b1 0.08 | 0.89 \\u00b1 0.05 | 0.81 \\u00b1 0.09 | **0.94 \\u00b1 0.07** | **0.89 \\u00b1 0.06** |\\n| 2s3z | BC | 0.73 \\u00b1 0.11 | 0.68 \\u00b1 0.14 | 0.75 \\u00b1 0.06 | 0.67 \\u00b1 0.05 | **0.78 \\u00b1 0.06** | **0.69 \\u00b1 0.26** |\\n| | OMIGA | 0.86 \\u00b1 0.16 | 0.67 \\u00b1 0.05 | **1.00 \\u00b1 0.00** | 0.71 \\u00b1 0.15 | **1.00 \\u00b1 0.00** | **0.76 \\u00b1 0.19** |\\n| | CFCQL | 0.73 \\u00b1 0.15 | 0.66 \\u00b1 0.14 | 0.80 \\u00b1 0.09 | 0.63 \\u00b1 0.15 | **0.92 \\u00b1 0.02** | **0.68 \\u00b1 0.26** |\\n| Average | | 0.65 | 0.60 | 0.78 | 0.70 | **0.82** | **0.74** |\\n\\n(to be continued)\"}", "{\"title\": \"Thank you for your continued attention to our work!\", \"comment\": \"Thank you for your continued attention to our work. We appreciate your further inquiry, and we are happy to clarify the points raised.\\n\\n**Q1: computational cost of the 12m experiments.**\\n\\nThe computational cost of model training for the compared methods increases slightly but remains within an acceptable range. 
To illustrate, we present the specific computational cost for a single training run on the SMAC task, using the 3m and 12m environments as examples:\\n\\n| **Map** | **Method** | **Model Training (GPU Hours)** | **Trajectory Processing (GPU Hours)** | **Total (GPU Hours)** |\\n| --- | --- | --- | --- | --- |\\n| **3m** | **MADiTS** | 35 | 4 | 39 |\\n| | **MADiff** | 35 | - | 35 |\\n| | **MA-MBTS** | - | 43 | 43 |\\n| **12m** | **MADiTS** | 38 | 6 | 44 |\\n| | **MADiff** | 37 | - | 37 |\\n| | **MA-MBTS** | - | 54 | 54 |\\n\\nWhen dealing with environments involving hundreds or thousands of agents, addressing the scalability challenge through techniques such as grouping becomes a topic of great interest. We acknowledge this valuable suggestion and plan to explore it in future work. Thank you for raising this important question.\\n\\n**Q2: quantifiable measures of agent diversity.**\\n\\nBuilding on the illustration provided in Figure 7, we proceed with a quantitative analysis using the mean Average Displacement Error (mean ADE) [1], a distance-based metric for evaluating the diversity of new trajectories generated from fixed initial states. The mean ADE is calculated as follows:\\n\\n$$\\n\\\\text{mean ADE} = \\\\mathbb{E} _{(\\\\tau_i, \\\\tau_j)} \\\\left[\\\\frac{1}{T}\\\\sum _{t=1}^T \\\\mathscr{D}(p_t^i, p_t^j)\\\\right]\\n$$\\nwhere $\\\\tau_i$ and $\\\\tau_j$ denote two trajectories sampled from the augmented dataset generated under identical initial states. The positions of the agents at timestep $t$ are represented by $p_t^i$ and $p_t^j$, respectively, and $\\\\mathscr{D}$ is the distance metric, defined here as the Euclidean distance. To estimate the expectation, we fixed identical initial states and generated 256 trajectories using our MADiTS method through trajectory stitching. This metric evaluates the diversity of behaviors, where a higher value indicates greater diversity.\\n\\nWe conducted experiments using five different sets of fixed initial states. 
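For concreteness, the mean ADE above can be computed with a short script (a minimal sketch: the `(N, T, 2)` position-array layout and the exhaustive pairwise averaging are illustrative assumptions, not the authors' exact estimator):

```python
import numpy as np

def mean_ade(trajectories):
    """Mean Average Displacement Error over all trajectory pairs.

    `trajectories` has shape (N, T, 2): N trajectories of T timesteps of
    2-D agent positions, all generated from the same fixed initial state.
    Higher values indicate more diverse behaviors.
    """
    trajs = np.asarray(trajectories, dtype=float)
    n = trajs.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # average Euclidean distance between the two trajectories over time
            dists = np.linalg.norm(trajs[i] - trajs[j], axis=-1)
            total += dists.mean()
            pairs += 1
    return total / pairs

# Two straight-line trajectories a constant distance 1.0 apart:
a = np.stack([np.arange(3.0), np.zeros(3)], axis=-1)  # shape (3, 2)
b = np.stack([np.arange(3.0), np.ones(3)], axis=-1)
print(mean_ade([a, b]))  # 1.0
```

Averaging over all pairs is a straightforward estimate of the pairwise expectation when the trajectories are sampled independently from the same initial state.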
For MADiTS, the mean ADE is **0.184 \\u00b1 0.078**, whereas its variant without dynamics constraints or behavior correction shows a mean ADE of **0.213 \\u00b1 0.116**. While our method significantly improves overall performance, the reduction in mean ADE is limited to only 13%. It is important to note that under fixed initial states, the theoretical maximum return is unique and inherently lacks diversity. Thus, we consider the relationship between high returns and diversity to be a trade-off. Our method achieves higher performance while maintaining comparable diversity levels.\\n\\nThank you all again for your time and valuable advice! Feel free to let us know if you have any more comments or questions.\\n\\n\\n**References:**\\n\\n[1] Pellegrini, S., Ess, A., Schindler, K., Van Gool, L.: You\\u2019ll never walk alone: Modeling social behavior for multi-target tracking. In: 2009 IEEE 12th International Conference on Computer Vision. pp. 261\\u2013268. IEEE (2009)\"}", "{\"comment\": \"Thanks for the detailed response. After considering the rebuttal points, I will increase my score by one point.\"}", "{\"title\": \"Response to Reviewer Af9P (Part V)\", \"comment\": \"**References:**\\n\\n[1] Lu, C., et al. Synthetic experience replay. In *Advances in Neural Information Processing Systems*, 2023.\\n\\n[2] Ellis, B., et al. SMACv2: An improved benchmark for cooperative multi-agent reinforcement learning. In *Advances in Neural Information Processing Systems*, 2023.\\n\\n[3] Peng, B., et al. FACMAC: Factored multi-agent centralised policy gradients. In *Advances in Neural Information Processing Systems*, 34:12208\\u201312221, 2021.\\n\\n[4] Tian, Q., et al. Learning from good trajectories in offline multi-agent reinforcement learning. In *Association for the Advancement of Artificial Intelligence*, pp. 11672\\u201311680, 2023.\\n\\n[5] Li, Z., et al. Beyond conservatism: Diffusion policies in offline multi-agent reinforcement learning. 
*arXiv preprint arXiv:2307.01472*, 2023.\\n\\n[6] Zhu, Z., et al. MADiff: Offline multi-agent learning with diffusion models. *arXiv preprint arXiv:2305.17330*, 2023.\\n\\n[7] Oroojlooy, A., et al. A review of cooperative multi-agent deep reinforcement learning. *Applied Intelligence*, pp. 1\\u201346, 2022.\\n\\n[8] Pan, L., et al. Plan better amid conservatism: Offline multi-agent reinforcement learning with actor rectification. In *International Conference on Machine Learning*, pp. 17221\\u201317237, 2022.\\n\\n[9] OpenAI. ChatGPT: Optimizing language models for dialogue. 2023. URL: https://www.openai.com/chatgpt.\\n\\n[10] Brooks, T., et al. Video generation models as world simulators. 2024. URL: https://openai.com/research/video-generation-models-as-world-simulators.\\n\\n[11] Wang, X., et al. DriveDreamer: Towards real-world-driven world models for autonomous driving. In *European Conference on Computer Vision*, 2024.\\n\\n[12] Kazerouni, A., et al. Diffusion models in medical imaging: A comprehensive survey. *Medical Image Analysis*, 2023, 88: 102846.\\n\\n[13] Wang, Z., et al. A financial time series denoiser based on diffusion models. In *Proceedings of the 5th ACM International Conference on AI in Finance*. 2024: 72\\u201380.\\n\\n[14] Singi, S., et al. Decision making for human-in-the-loop robotic agents via uncertainty-aware reinforcement learning. In *2024 IEEE International Conference on Robotics and Automation (ICRA)*. IEEE, 2024: 7939\\u20137945.\\n\\n[15] Yang, Q., et al. WCSAC: Worst-case soft actor critic for safety-constrained reinforcement learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*. 2021, 35(12): 10639\\u201310646.\"}", "{\"summary\": \"The paper investigates the use of diffusion models for offline multi-agent reinforcement learning (MARL), extending certain diffusion-based approaches originally developed for single-agent offline RL to a multi-agent setting. 
Specifically, the authors combine Denoising Diffusion Probabilistic Models (DDPM), stitching techniques (to merge high-reward and low-reward data points and address data imbalance), and Integrated Gradients (IG) for identifying underperforming agents. This results in a data augmentation process that can enhance offline datasets and other MARL algorithms. Experiments are conducted using MPE and SMACv1, with comparisons to BC and OMIGA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The diffusion approach makes sense. Using a diffusion model to enrich offline datasets could support several other MARL algorithms.\", \"The employed techniques align with recent advancements in diffusion-based RL.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": \"1. The techniques seem to be a straightforward extension from diffusion-based single-agent RL.\\n2. Most techniques, except those in Section 4.2, appear to be directly adapted from single-agent settings, with single-agent states and actions replaced by global observations and actions.\\n3. There is already existing work on diffusion models for offline MARL. For instance, how does this approach compare to MADiff [1], which also develops diffusion models to enhance offline datasets in offline MARL? Notably, MADiff compares its methods against several MARL baselines, which seems lacking in this work.\\n4. In addition to MPE and SMACv1, more challenging MARL tasks like MAMuJoCo and SMACv2 should be considered for comparisons.\\n5. It\\u2019s unclear how this approach compares to other data-augmentation techniques in offline MARL. Additional citations, discussions, and experimental comparisons would be needed.\\n\\n\\n**Reference:**\\n[1] Zhu, Z., Liu, M., Mao, L., Kang, B., Xu, M., Yu, Y., Ermon, S., & Zhang, W. (2023). MADiff: Offline Multi-agent Learning with Diffusion Models. 
arXiv preprint arXiv:2305.17330.\", \"questions\": [\"How does your approach compare to MADiff?\", \"Can these methods be applied to cooperative-competitive settings?\", \"Would the approach work effectively with balanced but bad datasets? What would happen if the data were balanced but consisted primarily of transitions with low rewards?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": [\"There might be some ethical concerns:\", \"Diffusion models trained on offline datasets may inherit biases from the original data, particularly if it is imbalanced or contains biases toward certain actions or outcomes. This could result in the reinforcement of unintended behaviors, potentially harmful in applications like autonomous driving, healthcare, or finance.\", \"If the diffusion model generates synthetic data that diverges from real-world scenarios, this could lead to decisions that are unreliable in practice.\", \"The diffusion model could produce agent interactions that are difficult to predict or manage, raising questions about accountability if something goes wrong.\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the updates. I have increased my rating.\\n\\nBest --\"}", "{\"title\": \"Response to Reviewer Af9P (Part I)\", \"comment\": \"Thank you very much for carefully reviewing our paper and providing constructive comments and suggestions, which have helped improve the work a lot. Our response is presented as follows:\\n\\n**Q1: a straightforward extension from diffusion-based single-agent RL? with single-agent states and actions replaced by global observations and actions.**\", \"a1\": \"Our method is the first to integrate a diffusion model for data generation with Trajectory Stitching, enhancing coordination in multi-agent reinforcement learning (MARL) and addressing challenges beyond single-agent RL[1]. 
While both approaches aim to improve sample efficiency, MARL presents unique difficulties due to complex agent interactions. Specifically, we leverage the diffusion model's generative capabilities to handle two critical types of trajectory data imbalances: **temporal**, concerning relationships across timesteps, and **spatial**, focusing on interactions among agents. These imbalances complicate data representation and utilization, hindering MARL progress.\\n\\nTo address these issues, we propose two innovations. A bidirectional dynamics constraint ensures generated trajectories align with environmental dynamics. An offline credit assignment technique identifies and optimizes underperforming agents within trajectory segments. Experimental results show that MADiTS achieves superior sample efficiency across diverse MARL tasks, effectively tackling data imbalance and coordination challenges.\\n\\n**Q2: How does this approach compare to MADiff.**\", \"a2\": \"Our method MADiTS differs from MADiff in addressing sample efficiency challenges in MARL. MADiTS focuses on data augmentation by combining a diffusion model with dynamic detection and credit assignment techniques for offline MARL policy training. In contrast, MADiff primarily uses a diffusion model for trajectory prediction in action planning. Specifically, MADiTS specifically targets data imbalance issues in MARL, while MADiff aims to enhance learning efficiency in standard offline MARL settings. To emphasize these distinctions, we compare MADiTS with MADiff across various environments. The results, summarized in the table below, show that MADiTS outperforms MADiff and other baselines, demonstrating its superior effectiveness in handling data imbalance challenges. 
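As a minimal illustration of the bidirectional dynamics constraint introduced in A1, the following sketch filters generated segments by checking them against both a forward and an inverse dynamics model (the model callables, error thresholds, and array shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def passes_dynamics_check(obs, actions, forward_model, inverse_model,
                          fwd_tol=0.1, inv_tol=0.1):
    """Accept a generated segment only if it is consistent with both a
    forward dynamics model f(o_t, a_t) ~ o_{t+1} and an inverse dynamics
    model g(o_t, o_{t+1}) ~ a_t (thresholds are illustrative).
    """
    for t in range(len(actions)):
        fwd_err = np.linalg.norm(forward_model(obs[t], actions[t]) - obs[t + 1])
        inv_err = np.linalg.norm(inverse_model(obs[t], obs[t + 1]) - actions[t])
        if fwd_err > fwd_tol or inv_err > inv_tol:
            return False  # discard dynamically inconsistent segments
    return True

# Toy deterministic dynamics: o_{t+1} = o_t + a_t
fwd = lambda o, a: o + a
inv = lambda o, o_next: o_next - o
obs = np.array([[0.0], [1.0], [3.0]])
acts = np.array([[1.0], [2.0]])
bad_acts = np.array([[1.0], [5.0]])  # second action inconsistent with obs
print(passes_dynamics_check(obs, acts, fwd, inv))      # True
print(passes_dynamics_check(obs, bad_acts, fwd, inv))  # False
```

In practice both models would be learned networks, and only segments passing the check would enter the augmented dataset.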
We also include performance of other MARL algorithms on the imbalanced dataset, with detailed results presented in the *Original* column.\\n\\n| Envs | Algs | Original | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| CN | BC | 17.27 \\u00b1 3.66 | 9.03 \\u00b1 3.40 | 40.44 \\u00b1 3.75 | 30.24 \\u00b1 4.82 | **43.44 \\u00b1 6.11** | **37.82 \\u00b1 4.45** |\\n| | OMIGA | -2.27 \\u00b1 57.07 | -11.60 \\u00b1 58.90 | 19.26 \\u00b1 67.68 | 7.27 \\u00b1 66.89 | **23.02 \\u00b1 69.69** | **22.91 \\u00b1 44.94** |\\n| | CFCQL | -33.40 \\u00b1 44.28 | -56.44 \\u00b1 40.38 | 30.08 \\u00b1 12.97 | 21.88 \\u00b1 10.71 | **39.57 \\u00b1 16.14** | **28.60 \\u00b1 20.94** |\\n| PP | BC | 48.49 \\u00b1 4.69 | 49.21 \\u00b1 3.90 | 49.55 \\u00b1 6.50 | 49.93 \\u00b1 3.74 | **54.85 \\u00b1 4.23** | **55.50 \\u00b1 4.28** |\\n| | OMIGA | 37.90 \\u00b1 25.34 | 26.73 \\u00b1 40.67 | 47.48 \\u00b1 20.97 | 58.44 \\u00b1 4.76 | **63.02 \\u00b1 3.40** | **63.71 \\u00b1 5.67** |\\n| | CFCQL | 45.03 \\u00b1 4.62 | 28.88 \\u00b1 6.29 | 45.50 \\u00b1 6.76 | 30.04 \\u00b1 7.02 | **47.38 \\u00b1 3.74** | **32.25 \\u00b1 10.98** |\\n| World | BC | 47.29 \\u00b1 3.00 | 48.30 \\u00b1 5.96 | 50.79 \\u00b1 2.62 | 51.55 \\u00b1 4.96 | **54.25 \\u00b1 4.35** | **52.84 \\u00b1 5.29** |\\n| | OMIGA | 54.23 \\u00b1 6.20 | 52.92 \\u00b1 5.64 | 49.55 \\u00b1 24.99 | 48.26 \\u00b1 24.75 | **57.35 \\u00b1 5.89** | **58.59 \\u00b1 9.32** |\\n| | CFCQL | 28.59 \\u00b1 1.90 | 18.14 \\u00b1 18.16 | 28.63 \\u00b1 10.14 | 34.35 \\u00b1 6.91 | **29.62 \\u00b1 10.91** | **39.36 \\u00b1 1.91** |\\n| Average | | 27.01 | 18.35 | 40.14 | 36.88 | **45.83** | **43.50** |\\n| 2m_vs_1z | BC | 0.03 \\u00b1 0.06 | 0.03 \\u00b1 0.06 | 0.30 \\u00b1 0.18 | 0.30 \\u00b1 0.18 | **0.35 \\u00b1 0.20** | **0.35 \\u00b1 0.20** |\\n| | OMIGA | 0.59 \\u00b1 0.28 | 0.59 \\u00b1 0.28 | 0.97 \\u00b1 0.04 | 0.97 \\u00b1 0.04 | **0.98 \\u00b1 0.02** | 
**0.98 \\u00b1 0.02** |\\n| | CFCQL | 0.72 \\u00b1 0.24 | 0.72 \\u00b1 0.24 | 0.92 \\u00b1 0.05 | 0.92 \\u00b1 0.05 | **0.94 \\u00b1 0.05** | **0.94 \\u00b1 0.05** |\\n| 3m | BC | 0.42 \\u00b1 0.05 | 0.38 \\u00b1 0.20 | 0.44 \\u00b1 0.18 | 0.36 \\u00b1 0.15 | **0.51 \\u00b1 0.19** | **0.43 \\u00b1 0.18** |\\n| | OMIGA | 0.93 \\u00b1 0.05 | 0.90 \\u00b1 0.03 | **1.00 \\u00b1 0.00** | 0.94 \\u00b1 0.03 | **1.00 \\u00b1 0.00** | **0.96 \\u00b1 0.03** |\\n| | CFCQL | 0.90 \\u00b1 0.06 | 0.80 \\u00b1 0.08 | 0.89 \\u00b1 0.05 | 0.81 \\u00b1 0.09 | **0.94 \\u00b1 0.07** | **0.89 \\u00b1 0.06** |\\n| 2s3z | BC | 0.73 \\u00b1 0.11 | 0.68 \\u00b1 0.14 | 0.75 \\u00b1 0.06 | 0.67 \\u00b1 0.05 | **0.78 \\u00b1 0.06** | **0.69 \\u00b1 0.26** |\\n| | OMIGA | 0.86 \\u00b1 0.16 | 0.67 \\u00b1 0.05 | **1.00 \\u00b1 0.00** | 0.71 \\u00b1 0.15 | **1.00 \\u00b1 0.00** | **0.76 \\u00b1 0.19** |\\n| | CFCQL | 0.73 \\u00b1 0.15 | 0.66 \\u00b1 0.14 | 0.80 \\u00b1 0.09 | 0.63 \\u00b1 0.15 | **0.92 \\u00b1 0.02** | **0.68 \\u00b1 0.26** |\\n| Average | | 0.65 | 0.60 | 0.78 | 0.70 | **0.82** | **0.74** |"}", "{\"summary\": \"The paper presents a novel data augmentation method, MADiTS, to generate high-quality multi-agent trajectories for offline policy learning. A multi-agent trajectory diffusion model is trained to generate augmented future trajectories conditioned on the first-step joint observations. Multiple techniques are proposed to improve the quality of the augmented trajectories. First, a forward dynamics model is used together with the inverse dynamics model to filter out dynamically inconsistent segments. Then, the integrated gradient is adopted to determine underperforming agents, regenerating those agents' trajectories while keeping the others fixed. Experimental results on MPE and SMAC datasets demonstrate the superior performance of MADiTS over baseline augmentation methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper is well-written and easy to follow.\\n\\n2. The idea of using the integrated gradient to find underperforming agents is novel to me, and I believe it might have broader potential in multi-agent learning.\\n\\n3. The experiment results are impressive, achieving SOTA performance on all datasets.\", \"weaknesses\": \"1. The experiments are limited to environments with a small number of agents (up to five). The effectiveness of MADiTS on datasets with a larger number of agents has not been demonstrated.\\n\\n2. Given the known issues [1] with SMAC (v1), why were results not included for SMACv2?\\n\\n3. Unlike DiffStitch, which conditions on both the first and last trajectory segments, the diffusion model in MADiTS generates the subsequent trajectory based solely on the first state, which makes the naming of \\\"Stitch\\\" a little confusing.\\n\\n4. A key highlight of this paper is the use of integrated gradients to identify underperforming agents. However, this part lacks sufficient analysis. It would be beneficial to quantitatively demonstrate a general correlation between the value of integrated gradients and underperformance degrees across different environments.\\n\\n5. I have doubts about the extent to which regeneration can improve the overall reward. The multi-agent diffusion model learns from the joint trajectory distribution, meaning it is constrained by the combination of cooperative capabilities present in the training data. Suppose, in a two-agent environment (agent 0 and agent 1), the training set includes two joint trajectories, $\\tau_1$ and $\\tau_2$. In $\\tau_1$, agent 0 significantly contributes to the team reward, while agent 1\\u2019s contribution is minimal. In $\\tau_2$, the situation is reversed. Now, suppose the model\\u2019s first generation produces a trajectory where agent 0 has high contribution and agent 1 has low contribution. 
If we successfully use integrated gradients to identify agent 1 and attempt to condition on agent 0\\u2019s trajectory to regenerate agent 1\\u2019s trajectory, the model should be unable to produce a high-contribution trajectory for agent 1, as the training dataset lacks joint trajectories where both agents have high contributions. Therefore, in this case, MADiTS may not effectively combine individual trajectories to yield better joint trajectories. The authors should discuss similar cases, clarifying the method\\u2019s limitations and maybe requirements for the training dataset.\\n\\n[1] Ellis, Benjamin, et al. \\\"Smacv2: An improved benchmark for cooperative multi-agent reinforcement learning.\\\" *Advances in Neural Information Processing Systems* 36 (2024).\", \"update\": \"Since the authors' responses have adequately addressed my concerns, I recommend accepting the paper.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Dear Reviewer bitr, do our responses address your questions?\", \"comment\": \"Dear Reviewer bitr:\\n\\nWe thank you again for your comments and hope our responses have addressed your questions. As the response system will end in four days, please let us know if we missed anything. More questions about our paper are always welcome.\\n\\nSincerely yours,\\n\\nAuthors of Paper10913\"}", "{\"comment\": \"Thank you to the authors for making a serious effort to address my concerns. I believe the additional experiments provide better clarity on the contributions. 
I am now more positive about the paper and will increase my rating after seeing the authors update the paper with the additional experiments and discussions.\"}", "{\"title\": \"Dear Reviewer onPK, do our responses address your questions?\", \"comment\": \"Dear Reviewer onPK:\\n\\nWe thank you again for your comments and hope our responses have addressed your questions. As the response system will end in four days, please let us know if we missed anything. More questions about our paper are always welcome.\\n\\nSincerely yours,\\n\\nAuthors of Paper10913\"}", "{\"comment\": \"Thank you for sharing your time in evaluating our paper, and for supporting our community.\"}", "{\"comment\": \"Thank you for your time with our paper, and for supporting the community.\"}", "{\"comment\": \"I appreciate the authors' detailed responses and the additional experimental results. The analysis of integrated gradients in multi-agent learning is well-founded. I suggest that the authors include a discussion on the potential limitations related to Q5.\\n\\nSince the authors' responses have adequately addressed my concerns, I recommend accepting the paper.\"}", "{\"title\": \"Response to Reviewer Af9P (Part II)\", \"comment\": \"**Q3: Results on more challenging MARL tasks like MAMuJoCo and SMACv2**\", \"a3\": \"Our method enhances sample efficiency across a range of offline MARL approaches and benchmarks. Beyond the commonly used SMAC and MPE environments, we conducted experiments on the **terran_5_vs_5** and **zerg_5_vs_5** maps from SMACv2[2] and the **4ant** environment from MAMuJoCo[3]. As shown in the table below, MADiTS consistently improves the sample efficiency of algorithms such as BC, OMIGA, and CFCQL under varying data imbalance settings. 
These results highlight its robustness and general applicability.\\n\\n| Envs | Algs | Balanced | Original | | MA-MBTS | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| terran_5_vs_5 | BC | 0.55 \\u00b1 0.08 | 0.46 \\u00b1 0.02 | 0.38 \\u00b1 0.27 | 0.46 \\u00b1 0.05 | 0.41 \\u00b1 0.13 | 0.51 \\u00b1 0.03 | 0.42 \\u00b1 0.08 | **0.54 \\u00b1 0.08** | **0.46 \\u00b1 0.09** |\\n| | OMIGA | 0.59 \\u00b1 0.06 | 0.55 \\u00b1 0.15 | 0.54 \\u00b1 0.11 | 0.58 \\u00b1 0.08 | 0.56 \\u00b1 0.04 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.04 | **0.70 \\u00b1 0.06** | **0.61 \\u00b1 0.08** |\\n| | CFCQL | 0.65 \\u00b1 0.09 | 0.54 \\u00b1 0.12 | 0.49 \\u00b1 0.09 | 0.54 \\u00b1 0.10 | 0.49 \\u00b1 0.19 | 0.53 \\u00b1 0.06 | 0.50 \\u00b1 0.07 | **0.55 \\u00b1 0.11** | **0.52 \\u00b1 0.06** |\\n| zerg_5_vs_5 | BC | 0.32 \\u00b1 0.16 | 0.30 \\u00b1 0.19 | 0.27 \\u00b1 0.13 | 0.28 \\u00b1 0.08 | 0.29 \\u00b1 0.13 | 0.36 \\u00b1 0.07 | 0.33 \\u00b1 0.08 | **0.40 \\u00b1 0.06** | **0.38 \\u00b1 0.06** |\\n| | OMIGA | 0.44 \\u00b1 0.09 | 0.34 \\u00b1 0.05 | 0.31 \\u00b1 0.09 | 0.36 \\u00b1 0.11 | 0.34 \\u00b1 0.07 | 0.36 \\u00b1 0.13 | 0.33 \\u00b1 0.11 | **0.38 \\u00b1 0.07** | **0.37 \\u00b1 0.13** |\\n| | CFCQL | 0.55 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.25 \\u00b1 0.09 | 0.47 \\u00b1 0.14 | 0.31 \\u00b1 0.09 | 0.46 \\u00b1 0.07 | 0.34 \\u00b1 0.10 | **0.50 \\u00b1 0.08** | **0.42 \\u00b1 0.15** |\\n| Average | | 0.51 | 0.44 | 0.37 | 0.45 | 0.40 | 0.47 | 0.42 | **0.51** | **0.46** |\\n| 4ant | BC | 39.43 \\u00b1 6.09 | 25.31 \\u00b1 8.15 | 10.48 \\u00b1 12.58 | 19.75 \\u00b1 3.68 | 10.58 \\u00b1 3.38 | 28.30 \\u00b1 8.60 | 20.29 \\u00b1 6.57 | **31.66 \\u00b1 5.10** | **21.89 \\u00b1 6.23** |\\n| | OMIGA | 52.30 \\u00b1 5.71 | 33.12 \\u00b1 10.60 | 31.76 \\u00b1 10.99 | 40.00 \\u00b1 8.72 | 30.61 \\u00b1 8.46 | 40.50 \\u00b1 6.74 | 34.35 \\u00b1 4.47 | **48.92 
\\u00b1 6.57** | **43.09 \\u00b1 8.15** |\\n| | CFCQL | 58.87 \\u00b1 6.94 | 38.78 \\u00b1 7.85 | 7.10 \\u00b1 9.85 | 32.41 \\u00b1 5.93 | 6.37 \\u00b1 5.79 | 41.98 \\u00b1 7.12 | 24.03 \\u00b1 8.07 | **48.43 \\u00b1 5.91** | **35.28 \\u00b1 6.86** |\\n| Average | | 50.20 | 32.40 | 16.44 | 30.72 | 15.85 | 36.92 | 26.22 | **43.00** | **33.42** |\"}", "{\"comment\": \"Thank you for your valuable contribution to our community. We appreciate your positive feedback on our paper and are grateful for your insightful suggestions regarding the regeneration problem, which have greatly enhanced our work. We will incorporate these suggestions into the discussion accordingly.\"}", "{\"title\": \"Response to Reviewer Af9P (Part III)\", \"comment\": \"**Q4: Additional citations, discussions, and experimental comparisons to other data-augmentation techniques in offline MARL**\", \"a4\": \"Our method addresses sample efficiency issues by using diffusion models to perform trajectory stitching for data augmentation in temporally and spatially imbalanced datasets. This approach enables offline MARL algorithms to achieve better performance by learning from the enhanced dataset. Notably, this is an unexplored area in existing research and is investigated for the first time in this paper, setting it apart from the mentioned works.\\n\\n- SIT[4] is an offline MARL method that focuses on learning an effective joint policy from agent-wise imbalanced datasets, primarily addressing spatial imbalance. It achieves this through an attention-based reward decomposition network for offline credit assignment, which identifies high-quality individual trajectories for sharing among agents, and a graph attention network for conservative policy training. In contrast, our method MADiTS prioritizes improving sample efficiency through a data-oriented augmentation pipeline. 
By employing a diffusion model-based trajectory stitching mechanism, MADiTS enhances dataset quality to address both temporal and spatial imbalances. Unlike SIT, which directly learns policies from imbalanced datasets, MADiTS generates augmented datasets that can be flexibly used by any offline MARL algorithm, providing enhanced flexibility and modularity.\\n- DOM2[5] is a diffusion model-based offline MARL algorithm that aims to enhance policy expressiveness and diversity. It achieves notable improvements in performance, generalization, and data efficiency through an accelerated solver for diffusion-based policy construction and a policy regularizer, while also scaling up dataset size to boost policy learning. DOM2 excels on balanced datasets, outperforming traditional conservatism-based offline MARL methods. In contrast, MADiTS targets the challenges posed by imbalanced datasets. By addressing temporal and spatial imbalances, MADiTS improves dataset quality, enabling other offline MARL algorithms to achieve better performance and demonstrating its effectiveness in handling imbalanced scenarios.\\n- MADiff[6] is a diffusion model-based offline MARL algorithm designed to predict future joint actions for decision-making by modeling teammate behaviors. It uses an attention-based diffusion model to capture joint observation sequences and infer actions for planning, achieving strong performance on balanced datasets in standard offline MARL settings. In contrast, MADiTS specifically targets the challenges of data imbalance in offline MARL, focusing on improving sample efficiency through innovative data augmentation techniques. While MADiff enhances learning efficiency in balanced scenarios, MADiTS addresses the unique difficulties posed by imbalanced datasets. To highlight these distinctions, we compared the performance of MADiff above, extended for data augmentation, with MADiTS across various environments. 
The results demonstrate MADiTS\\u2019s superior effectiveness in tackling data imbalance challenges.\\n\\nWe provide a comparison of our method with MADiff across various environments in Table 1 and Table 4 in the revised manuscript. Additionally, key differences with other related methods on relevant topics are discussed in detail in Appendix B.\\n\\n**Q5: Can these methods be applied to cooperative-competitive settings?**\", \"a5\": \"Our paper mainly addresses the cooperative multi-agent reinforcement learning (MARL) problem[7], a widely studied setup where all agents share a global reward. Through extensive experiments across various offline MARL environments, we demonstrate that MADiTS significantly improves MARL performance. While our current focus is on cooperative settings, the method can be naturally extended to cooperative-competitive scenarios by equipping each team with its own MADiTS model. In this setup, teams can independently learn model parameters using tailored buffers for training. We thank the reviewer for highlighting this intriguing direction and plan to explore it further in future work.\"}", "{\"metareview\": \"This paper introduces MADiTS, a novel diffusion-based trajectory stitching method for offline Multi-Agent Reinforcement Learning (MARL), addressing temporal and spatial data imbalances. The proposed method effectively generates high-quality trajectories by integrating a diffusion model, bidirectional dynamics constraints, and offline credit assignment to identify and enhance underperforming agents. Strengths include the innovative use of trajectory augmentation, robust experimental results demonstrating state-of-the-art performance across diverse MARL tasks, and clear explanations of its contributions. 
While computational cost analysis and extensions to large-scale settings are areas for future work, the thorough rebuttal and additional experiments addressing scalability, diversity, and comparisons with baselines like MADiff and DOM2 solidify the paper's contributions. These merits strongly support its acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about scalability, diversity, computational cost, and comparisons with related methods like MADiff, DOM2, and SIT. The authors provided extensive responses, including new experiments on larger MARL settings (e.g., 12m map in SMAC), diversity analysis using ADE metrics, and computational cost evaluations showing competitive performance. They clarified MADiTS's unique contributions and addressed vague descriptors in the original manuscript. These clarifications and the addition of comparative baselines significantly strengthened the paper's case, leading to improved reviewer scores and consensus on its acceptance.\"}", "{\"title\": \"Dear Reviewer Af9P, do our responses address your questions?\", \"comment\": \"Dear Reviewer Af9P:\\n\\nWe thank you again for your comments and hope our responses could address your questions. As the response system will end in four days, please let us know if we missed anything. More questions on our paper are always welcome. \\n\\nSincerely yours,\\n\\nAuthors of Paper10913\"}
Specifically, it uses diffusion models to generate trajectory segments, then employs a bidirectional environmental dynamics constraint to ensure the consistency of trajectories with environmental dynamics, and finally utilizes an offline credit assignment technique to identify and optimize the behavior of underperforming agents in the generated segments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe overall structure of the paper is complete, the content is substantial, and the experimental evidence is convincing.\\n2.\\tGenerating trajectories using diffusion models and ensuring the generated segments conform to environmental dynamics with bidirectional dynamic constraints is similar to learning a world model and using it to verify the rationality of trajectories.\\n3.\\tAs a data augmentation method, MADiTS is compatible with any offline MARL algorithm and has good versatility.\", \"weaknesses\": \"1.\\tThe paper uses several threshold hyperparameters (environment-independent), which may make the method too sensitive to the setting of hyperparameters. Moreover, the paper does not conduct experimental analysis on methods with different hyperparameters.\\n2.\\tMADiTS involves multiple models and steps, which may lead to computational complexity. Additionally, the paper does not include experimental analysis on this aspect.\", \"questions\": \"1.\\tHow does the proposed method address the imbalance of the dataset in terms of time and space? For example, which technique solves the time imbalance problem, and which solves the spatial imbalance problem?\\n2.\\tHow are environment-independent hyperparameters like $\\\\delta_{recon}$ determined? 
What effects do different values have?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer qWY9 (Part I)\", \"comment\": \"Thank you very much for carefully reviewing our paper and providing constructive comments and suggestions. In response, we have conducted additional experiments and clarified the points of confusion. The detailed response is presented as follows:\\n\\n**Q1: The effectiveness of MADiTS on datasets with a larger number of agents.**\", \"a1\": \"MADiTS demonstrates enhanced sample efficiency across various methods, even as the number of agents increases. To validate its effectiveness, we conducted experiments on the 12m map from SMAC, with 12 marine agents and 12 marine enemies. The results, shown in the table below, indicate that MADiTS outperforms the baseline method MA-MBTS and achieves competitive performance compared to MADiff, highlighting its broad applicability. 
Addressing scalability in large-scale MARL settings with hundreds or thousands of agents[1], possibly through techniques like agent grouping, is a promising direction for future work.\\n\\n| Envs | Algs | Balanced | Original | | MA-MBTS | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| 12m | BC | 0.95 \\u00b1 0.03 | 0.83 \\u00b1 0.11 | 0.63 \\u00b1 0.28 | 0.81 \\u00b1 0.11 | 0.56 \\u00b1 0.22 | 0.85 \\u00b1 0.12 | **0.65 \\u00b1 0.21** | **0.88 \\u00b1 0.07** | **0.65 \\u00b1 0.12** |\\n| | OMIGA | 0.97 \\u00b1 0.03 | 0.94 \\u00b1 0.02 | 0.85 \\u00b1 0.02 | 0.95 \\u00b1 0.03 | 0.85 \\u00b1 0.03 | **0.96 \\u00b1 0.03** | 0.86 \\u00b1 0.04 | 0.95 \\u00b1 0.02 | **0.87 \\u00b1 0.02** |\\n| | CFCQL | 0.90 \\u00b1 0.04 | 0.78 \\u00b1 0.09 | 0.50 \\u00b1 0.11 | 0.70 \\u00b1 0.11 | 0.46 \\u00b1 0.10 | 0.79 \\u00b1 0.07 | 0.53 \\u00b1 0.09 | **0.80 \\u00b1 0.05** | **0.63 \\u00b1 0.08** |\\n| Average | | 0.94 | 0.85 | 0.66 | 0.82 | 0.62 | 0.86 | 0.68 | **0.87** | **0.71** |\\n\\n**Q2: Results on SMACv2**\", \"a2\": \"Our method enhances sample efficiency across various offline MARL approaches and benchmarks. In addition to the commonly used SMAC and MPE, we conducted experiments on the terran_5_vs_5 and zerg_5_vs_5 maps from SMACv2. 
As shown in the following table, MADiTS consistently improves the sample efficiency of algorithms such as BC, OMIGA, and CFCQL under different data imbalance settings, demonstrating its general applicability.\\n\\n| Envs | Algs | Balanced | Original | | MA-MBTS | | MADiff | | MADiTS | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | exp | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s | exp-m | exp-s |\\n| terran_5_vs_5 | BC | 0.55 \\u00b1 0.08 | 0.46 \\u00b1 0.02 | 0.38 \\u00b1 0.27 | 0.46 \\u00b1 0.05 | 0.41 \\u00b1 0.13 | 0.51 \\u00b1 0.03 | 0.42 \\u00b1 0.08 | **0.54 \\u00b1 0.08** | **0.46 \\u00b1 0.09** |\\n| | OMIGA | 0.59 \\u00b1 0.06 | 0.55 \\u00b1 0.15 | 0.54 \\u00b1 0.11 | 0.58 \\u00b1 0.08 | 0.56 \\u00b1 0.04 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.04 | **0.70 \\u00b1 0.06** | **0.61 \\u00b1 0.08** |\\n| | CFCQL | 0.65 \\u00b1 0.09 | 0.54 \\u00b1 0.12 | 0.49 \\u00b1 0.09 | 0.54 \\u00b1 0.10 | 0.49 \\u00b1 0.19 | 0.53 \\u00b1 0.06 | 0.50 \\u00b1 0.07 | **0.55 \\u00b1 0.11** | **0.52 \\u00b1 0.06** |\\n| zerg_5_vs_5 | BC | 0.32 \\u00b1 0.16 | 0.30 \\u00b1 0.19 | 0.27 \\u00b1 0.13 | 0.28 \\u00b1 0.08 | 0.29 \\u00b1 0.13 | 0.36 \\u00b1 0.07 | 0.33 \\u00b1 0.08 | **0.40 \\u00b1 0.06** | **0.38 \\u00b1 0.06** |\\n| | OMIGA | 0.44 \\u00b1 0.09 | 0.34 \\u00b1 0.05 | 0.31 \\u00b1 0.09 | 0.36 \\u00b1 0.11 | 0.34 \\u00b1 0.07 | 0.36 \\u00b1 0.13 | 0.33 \\u00b1 0.11 | **0.38 \\u00b1 0.07** | **0.37 \\u00b1 0.13** |\\n| | CFCQL | 0.55 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.25 \\u00b1 0.09 | 0.47 \\u00b1 0.14 | 0.31 \\u00b1 0.09 | 0.46 \\u00b1 0.07 | 0.34 \\u00b1 0.10 | **0.50 \\u00b1 0.08** | **0.42 \\u00b1 0.15** |\\n| Average | | 0.51 | 0.44 | 0.37 | 0.45 | 0.40 | 0.47 | 0.42 | **0.51** | **0.46** |\\n\\n**Q3: Confusion of the term \\u201cstitching\\u201d.**\", \"a3\": \"In the context of our work, *stitching* refers to the **head-to-tail concatenation of two trajectory segments** to create a complete trajectory. 
This definition aligns with certain interpretations in the literature, where *stitching* is used metaphorically to describe various operations on trajectories. Existing works in the Offline RL domain employing *stitching* primarily focus on three main approaches:\\n\\n1. **Head-to-tail concatenation of two trajectory segments** (e.g., SSD[2]): In this approach, each trajectory segment is generated to ensure high quality, and subsequent segments are initiated from the endpoints of the preceding ones, achieving head-to-tail concatenation.\\n2. **Creating new transitions or sub-trajectories as bridges between existing trajectories** (e.g., DiffStitch[3], MBTS[4]): This involves generating a \\\"bridge\\\" to connect trajectory A and trajectory B, resulting in a new optimal trajectory C, structured as A + bridge + B.\\n3. **Describing the ability to learn optimal policies from suboptimal data** (e.g., QDT[5], EDT[6], WT[7]): Unlike the previous two, this interpretation uses *stitching* as a metaphor for the ability to extract optimal policies from suboptimal trajectories, without explicitly concatenating or modifying trajectories.\\n\\nThank you for bringing this point to our attention. We have further clarified the definition and application of stitching in the revised manuscript.\"}" ] }
EpmbH6DpJI
Robust Thompson Sampling Algorithms Against Reward Poisoning Attacks
[ "Yinglun Xu", "Zhiwei Wang", "Gagandeep Singh" ]
Thompson sampling is one of the most popular learning algorithms for online sequential decision-making problems and has rich real-world applications. However, current Thompson sampling algorithms are limited by the assumption that the rewards received are uncorrupted, which may not be true in real-world applications where adversarial reward poisoning exists. To make Thompson sampling more reliable, we want to make it robust against adversarial reward poisoning. The main challenge is that one can no longer compute the actual posteriors for the true reward, as the agent can only observe the rewards after corruption. In this work, we solve this problem by computing pseudo-posteriors that are less likely to be manipulated by the attack. We propose robust algorithms based on Thompson sampling for the popular stochastic and contextual linear bandit settings in both cases where the agent is aware or unaware of the budget of the attacker. We theoretically show that our algorithms guarantee near-optimal regret under any attack strategy.
[ "Robust Bandit Algorithm", "Thompson Sampling" ]
Reject
https://openreview.net/pdf?id=EpmbH6DpJI
https://openreview.net/forum?id=EpmbH6DpJI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zewxkiPaS1", "vNZIzoLGlW", "iAA81YqYDH", "gWlNxf8HKQ", "gBjXSm98Jm", "fgPKHJ8zA4", "c5d5dqkCim", "Y00MUm4yh6", "UZTbZKKDnA", "U8uUloAUrn", "H0nFm748g8", "Gk4ofhNpZq", "ETfuRAVtU1", "A8oYlV4mYh", "9Qh524RbR1", "1OSHFwyJSa" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment" ], "note_created": [ 1732944065221, 1737524181307, 1733056316592, 1732395845018, 1732395642580, 1732622342720, 1732396088864, 1730715935805, 1732654447411, 1730459427031, 1732656947075, 1732656995162, 1730597812613, 1730889490431, 1734788464568, 1732395499686 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_CZzm" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_MHci" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_nYyd" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_nYyd" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_CZzm" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_MHci" ], [ "ICLR.cc/2025/Conference/Submission12316/Reviewer_skgH" ], [ "ICLR.cc/2025/Conference/Submission12316/Area_Chair_LboR" ], [ "ICLR.cc/2025/Conference/Submission12316/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks again for your valuable comments. 
As the discussion period is ending, we'd love to know if our response addresses your concerns.\\n\\nIn addition to our previous response, we find that the work mentioned in your review [1] has already proved that the Thompson sampling algorithm for MAB is differentially private as-is. At the same time, it is not adversarially robust, as mentioned in our work. This clearly demonstrates that adversarial robustness is not a special case of differentially private online learning, and a differentially private algorithm may not be robust against adversarial data poisoning attacks.\\n\\n[1]: Ou, Tingting, Rachel Cummings, and Marco Avella. \\\"Thompson Sampling Itself is Differentially Private.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your response and for addressing some of my comments. The overall presentation of the paper has slightly improved. However, the revised version of the paper now exceeds the 12-page limit, which does not comply with the conference\\u2019s paper length requirements. It is unclear if this version should be considered for review.\\n\\nMore importantly, the main weakness of the paper\\u2014namely, the enormous multiplicative constant in the regret analysis, despite the narrow setting of Gaussian linear bandits with Gaussian priors\\u2014remains. Additionally, the clarification provided by the authors regarding the expected regret\\u2019s decrease as the adversarial budget increases in Figure 3 is not satisfying.\\n\\nFor all the above reasons, I maintain my score unchanged.\"}", "{\"comment\": \"Thank you for your constructive comments and suggestions. Based on the reviews, the revision has been uploaded, and the most important changes are highlighted in blue. 
Below is our response to your concerns.\", \"q1\": \"While I found the results interesting, they are also somewhat expected, given the findings from prior work on poisoning attacks and corruption robustness in bandits\", \"a1\": \"One can expect that making Thompson sampling robust is feasible using the techniques in our work, but it is hard to know exactly how robust a Thompson sampling algorithm can be and if it is possible to achieve near-optimal guarantees in the cases where the corruption level $C$ is known or unknown to the agent. Our work shows how to construct robust Thompson sampling algorithms and proves that they are near-optimal.\", \"q2\": \"The experiments related to the stochastic MAB setting do not include a corruption-robust baseline.\", \"a2\": \"We have included a robust baseline for the stochastic MAB setting in the revision. The baseline we call `Robust UCB\\u2019 is essentially the standard UCB algorithm with an additional bonus term $C/k_i(t)$ to arm $i$ to compensate for the potential influence of data corruption, where $C$ is the corruption level, and $k_i(t)$ is the number of times arm $i$ has been selected by time $t$. Such an idea is also mentioned in [1], and the details can be found in the revision in section 6.1. Our empirical results show that the performance of our robust Thompson sampling is similar to that of the robust UCB algorithm. Thompson sampling has several advantages over the UCB algorithm, as mentioned in our introduction section. 
Our empirical results further show that the robust Thompson sampling algorithms are competitive against robust UCB/OFUL algorithms.\", \"q3\": \"While I believe that the paper is overall well-written, the presentation could be improved and polished.\", \"a3\": \"We have fixed the typos and plots in the revision.\", \"q4\": \"it would be helpful if the authors elaborated further on their two remarks:\\n\\u2026 the attacker can apply less influence on the estimator by corrupting such data...,\\n...variance of the posterior distribution is limited due to the weighted estimator under reward poisoning..\", \"a4\": \"We have changed the two remarks in the revision:\\n\\n1. \\u2018The key is that it assigns less weight $w_t$ to the data with a `large' context $w_{t}=\\\\min (\\\\{1, \\\\gamma /\\\\left\\\\|x_{i(t)}(t)\\\\right\\\\|_{B(t)^{-1}}\\\\})$\\nso that its estimation is less sensitive to data corruption in these cases.\\u2019\\n\\n2. \\u2018The key reason is that the posterior distribution computed by the weighted estimator is less sensitive to any change in the rewards. So the agent is more robust against data corruption.\\u2019\\n\\n[1]: Lykouris, Thodoris, Vahab Mirrokni, and Renato Paes Leme. \\\"Stochastic bandits robust to adversarial corruptions.\\\" Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. 2018.\"}", "{\"comment\": \"Thank you for your constructive comments and suggestions. Based on the reviews, the revision has been uploaded, and the most important changes are highlighted in blue. Below is our response to your concerns.\", \"q1\": \"However, comparing existing results requires finding each cited paper and going through them one by one. 
It would be better if a table was included with previous works\", \"a1\": \"We have included a table in the revision to show the comparison of the theoretical results between our algorithms and current state-of-the-art algorithms.\", \"q2\": \"typos: At the end of Sections 4 and 5, the paper (He et al., 2022) should be cited instead of (He et al., 2023)\", \"a2\": \"We have fixed them in the revision.\", \"q3\": \"What can be said about Bayesian regret?\", \"a3\": \"In the stochastic setting, we have demonstrated that for any parameter vector\\n$\\\\mu = (\\\\mu_1, \\\\cdots, \\\\mu_N) \\\\in \\\\Pi_{i=1}^{N} [0,1]$\\nthe expected regret is bounded by\\n$ \\\\sum_{i=1}^{N} \\\\left[ 72 \\\\left(e^{64} + 5 \\\\right) \\\\frac{\\\\ln \\\\left( T \\\\Delta_i^2 \\\\right)}{\\\\Delta_i} + 12C + \\\\frac{13}{\\\\Delta_i} + \\\\Delta_i \\\\right], $\\nwhere $\\\\Delta_i$ represents the sub-optimality gap for arm $i$. Therefore, the analysis can be divided into two cases: $\\\\Delta_i \\\\geq e \\\\sqrt{\\\\frac{N \\\\ln N}{T}}$ and $\\\\Delta_i < e \\\\sqrt{\\\\frac{N \\\\ln N}{T}}$. There exists a constant $k$(e.g. $k=144 \\\\left(e^{64} + 5 \\\\right) $) independent of $\\\\mu$ such that\\n$ \\\\mathbb{E}[\\\\mathcal{R}(T) \\\\mid \\\\mu] \\\\leq k(\\\\sqrt{N T \\\\ln N} + N C + N) $\\nholds for any $\\\\mu \\\\in \\\\Pi_{i=1}^{N} [0,1]$. Consequently, the Bayesian regret satisfies\\n$ \\\\mathbb{E}_{\\\\mu}[\\\\mathbb{E}[\\\\mathcal{R}(T) \\\\mid \\\\mu]] \\\\leq k(\\\\sqrt{N T \\\\ln N} + N C + N). $\\n\\nThis result establishes a uniform bound on the expected regret across all possible values of $\\\\mu$, ensuring the validity of our regret analysis in the Bayesian context.\"}", "{\"comment\": \"Thank you for the clarifications. I'm also happy to see that you've improved the revised version of the paper. Hence, I will keep my positive score.\"}", "{\"comment\": \"Thank you for your constructive comments and suggestions. 
Based on the reviews, the revision has been uploaded, and the most important changes are highlighted in blue. Below is our response to your concerns.\", \"q1\": \"The authors considered a very limited setting.\", \"a1\": \"We have made it clear in the title, abstract, and contribution that we only consider Gaussian bandits. It is common in Thompson sampling studies to focus on a representative case of priors such as Gaussian distributions [1,2,3].\", \"q2\": \"The authors forgot to sum over all the possible N Arms\", \"a2\": \"In the proof of Theorem 4.1, we summed the regret upper bound over all possible N arms. We made a slight simplification in the summation notation ($\\\\sum_{i}$), and we will use the complete form in the revised version ($\\\\sum_{i=1}^{N}$). Specifically, as presented in the proof of Theorem 4.1:\\n\\\\begin{aligned}\\n &\\\\mathbb{E}[\\\\mathcal{R}(T)] = \\\\sum_{i} \\\\Delta_i \\\\mathbb{E}\\\\left[k_i(T)\\\\right] \\\\leq \\\\sum_{i}[72\\\\left(e^{64}+5\\\\right) \\\\frac {\\\\ln \\\\left(T \\\\Delta^2_i\\\\right)}{\\\\Delta_i} + 12C+\\\\frac{13}{ \\\\Delta_i}+\\\\Delta_i]\\n\\\\end{aligned}\\n\\nThe equality on the left-hand side holds because $\\\\mathbb{E}\\\\left[k_i(T)\\\\right]$ represents the expected number of times arm $i$ is pulled, and $\\\\Delta_i$ is the regret incurred each time arm $i$ is pulled. Therefore, $\\\\Delta_i \\\\mathbb{E}\\\\left[k_i(T)\\\\right]$ represents the expected regret due to pulling arm $i$. 
\nThe inequality on the right-hand side follows by combining the bounds from Lemmas A.3 to A.5, where we have:\n$$\n\\begin{aligned}\n&\\mathbb{E}\\left[k_i(T)\\right] \\leq 72\\left(e^{64}+4\\right)\\frac{\\ln \\left(T \\Delta_i^2\\right)}{\\Delta_i^2}+\\max [ \\frac{12C}{\\Delta_i}, \\frac{32 \\ln \\left(T \\Delta_i^2\\right)}{\\Delta_i^2} ]+\\frac{13}{\\Delta_i^2}+1 \n\\end{aligned}\n$$\nBy multiplying by $\\Delta_i$ and summing over all $N$ arms, we then obtain the desired inequality on the right-hand side.\", \"q3\": \"Another concern regarding the regret bound is the enormous multiplicative constant\", \"a3\": \"We have noticed the problem. This is a motivation for our simulations. We empirically show that, in practice, the constant dependency of the regret is small.\", \"q4\": \"the expected regret of TS seems not just close but almost exactly equal to the expected regret of UCB; it seems impossible to explain how the expected regret of the proposed algorithm can decrease when the adversarial budget increases\", \"a4\": \"We have explained the two observations in detail in the revision:\\n\\n\\u2018We notice that the regrets of the two vulnerable algorithms under the attack are almost identical. The reason is that the attacker highlights a target arm. For the vulnerable algorithms, they will believe that the target arm is optimal and almost always take that arm during training. So their behavior and regret are very similar under the attack.\\u2019\\n\\n\\u2018We notice that when the corruption level is small, the regret of our algorithm decreases slightly as the corruption level increases. The reason is that the attacker tries to highlight a randomly chosen target arm, and the arm is not the one with the lowest reward. 
Therefore, when the corruption level is not high enough to mislead the agent, the agent will only take sub-optimal arms for a limited number of rounds, and it tends to take the target arm more often instead of the worse arms, making the total regret slightly lower.\\u2019\", \"q5\": \"the titles of the x-axes are almost not readable\", \"a5\": \"We have fixed the plots in the revision.\", \"q6\": \"The robust version of Thompson Sampling seems to outperform the original Thompson Sampling and UCB algorithm when the adversarial budget is 0.\", \"a6\": \"The robust and original Thompson sampling are identical when the adversarial budget is 0. In the plots, their regrets are also the same when the adversarial budget is 0. It is also not a surprise that in our setup, the Thompson sampling algorithm performs slightly better than the UCB algorithm when there is no attack.\", \"other_questions_and_suggestions_about_the_writing\": \"\", \"a\": \"We have improved the proofs in the appendix and made them clearer in the revision. We clarify in the revision that the bars in the plots are the variance of the data. We add a readme file to the supplementary materials. We clarify that no generative AI tools have been used in our writing. We ran an AI scan on the text mentioned in the review using GPT Zero, and it says that with 98% accuracy, the text is fully written by humans. We have improved our writing in the revision.\\n\\n[1]: Honda, Junya, and Akimichi Takemura. \\\"Optimality of Thompson sampling for Gaussian bandits depends on priors.\\\" Artificial Intelligence and Statistics. PMLR, 2014.\\n\\n[2]: Agrawal, Shipra, and Navin Goyal. \\\"Thompson sampling for contextual bandits with linear payoffs.\\\" International conference on machine learning. PMLR, 2013.\\n\\n[3]: Hu, Bingshan, and Nidhi Hegde. \\\"Near-optimal thompson sampling-based algorithms for differentially private stochastic bandits.\\\" Uncertainty in Artificial Intelligence. 
PMLR, 2022\"}", "{\"summary\": \"This paper proposes the first near-optimal variants of Thompson sampling for stochastic bandit and contextual linear bandit problems with adversarial reward poisoning.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is the first work providing variants of Thompson sampling for this class of problems.\\nProposed algorithms are near-optimal and, being based on Thompson sampling, they inherit the advantages of Thompson sampling over approaches based on optimism in the face of uncertainty.\", \"weaknesses\": \"Previous works are mentioned in the introduction and related works section. However, comparing existing results requires finding each cited paper and going through them one by one. It would be better if a table was included with previous works.\", \"typos\": \"At the end of Sections 4 and 5, the paper (He et al., 2022) should be cited instead of (He et al., 2023).\", \"questions\": \"What can be said about Bayesian regret?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. I think including a discussion of Bayesian regret will improve the article. I will keep my positive score.\"}", "{\"summary\": \"This paper studies the performance of Thompson Sampling for stochastic Gaussian bandits and contextual Gaussian $d$-dimensional linear bandit problems with Gaussian prior. Starting from the prior distribution over the \\\"reward-parameter\\\", the Thompson Sampling algorithm works by randomly selecting actions according to their posterior probability of being optimal. 
More specifically, at each time step, it samples a \\\"reward-parameter\\\" estimate from the posterior distribution conditioned on the history and selects the optimal action for the sampled parameter estimate given the context.\\n\\nIn this work, the authors study the case where the rewards can be corrupted by an adversary with full knowledge of the \\\"reward-parameter\\\", the history, the current context and the random reward before attack. The adversary perturbs the reward with an additive perturbation. The sum of absolute perturbation, $C$, is referred to as the budget of the attack or the corruption level and is assumed to be finite. The authors propose two modifications of the Thompson Sampling to mitigate the effects of the adversarial attacks on the algorithm's performance. The first algorithm, developed for Gaussian stochastic bandits with Gaussian prior, uses an optimistic biased distribution over the reward-parameters to compensate for potential reward attacks. The authors claim that they have derived a bound of order $O(\\\\sqrt{NT \\\\log(N)} + N\\\\overline{C} + N)$ on the expected regret, where $N$ is the number of arms, $T$ is the number of time steps and $\\\\overline{C}$ is the \\\"robustness level\\\", a parameter of the algorithm that is assumed to be greater than $C$. It has to be noted that the constant term in the bounds is of the order $10^{29}$. \\n\\nThe second algorithm, developed for contextual Gaussian $d$-dimensional linear bandit problems with Gaussian prior, works by weighting the posteriors updates inversely proportional to the weighted 2-norm of the current context. The authors derive a probabilistic bound on the expected regret. \\n\\n\\nThe authors then perform experiments to demonstrate the performance of their modified Thompson Sampling algorithms. The first experiment compares the performance of their first algorithm against UCB and TS for Gaussian stochastic bandits with Gaussian priors. 
The second experiment compares the performance of their second algorithm against UCB and TS for contextual Gaussian $d$-dimensional linear bandit problems with Gaussian prior.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The main strength of the paper is to propose modifications to the Thompson Sampling algorithm to mitigate the effects of adversarial reward attacks.\", \"weaknesses\": [\"Although adapting the Thompson Sampling to adversarial reward attacks is an interesting idea, the paper suffers from several weaknesses listed below (the order does not reflect a hierarchy in the severity of the weaknesses).\", \"The first weakness concerns the considered settings. The authors considered a very limited setting as they assume that \\\"the priors for the rewards of the arms and the reward parameters are Gaussian distributions\\\" [lines 172-173]. They later claim that their algorithm can be \\\"in principle\\\" extended to general priors without giving any detail or example to support their claim. More problematic, their derived regret bounds (Theorem 4.1 and 5.1) rely extensively on those assumptions and do not hold for general priors. This limitation regarding the scope of the paper needs to be included in the title, the abstract, and the introduction.\", \"A second weakness concerns the result derived in Theorem 4.1. Following the last part of the proof in the Appendix, the authors forgot to sum over all the possible $ N $ arms. Taking the sum over all possible arms leads to a regret bound of order $O(N^{3/2}\\\\sqrt{T \\\\log(N)} + N^{2}\\\\overline{C} + N^{2})$. Another concern regarding the regret bound is the enormous multiplicative constant $72(e^{64}+5) \\\\approx 10^{29}$, which raises the question of whether this bound can be applicable in practice.\", \"Another weakness concerns the clarity of the paper and the presentation of the proofs. 
Notations are at best cumbersome (e.g., $x_{i(t)}(t)^T\\\\mu$), sometimes incoherent (e.g., the history denoted $H_{t-1}$ in the paper and later changed to $F_{t-1}$ in the appendix, mixing notations $\\\\theta$ and $\\\\Theta$), incomplete (e.g., summation symbols without an upper limit $\\\\sum_{t=1}$ [line 133]), or simply not defined (e.g., $\\\\tau_k$, $\\\\Delta_i$, $\\\\overline{E_i^\\\\theta(t)}$, $\\\\overline{E_i^\\\\mu}(t)$, $c_s$, $L_i(T)$). This seriously hinders the clarity of the paper and the ability to evaluate the proof's correctness. Although the single column allows enough space to write on a single line, the authors often split equations or even expressions. Examples can be found on lines 311 to 315 or 684 to 690. The authors also use overfull lines, such as on line 1033. Often, proofs consist simply of a long chain of equalities and inequalities with no justification given to the reader. Sometimes, the authors cite results from previous papers without providing any reference. Overall, the appendix seems to have been written to discourage readers or make it impossible for them to verify the proof.\", \"One more weakness concerns the writing of the paper. This reviewer believes that at least parts of the paper have been written by generative AI tools. Indeed, parts of the paper present typical AI writing patterns, such as repetition of ideas and phrases, overuse of specific writing structures, and generic or high-level descriptions. One example is in the introduction between lines 36 and 47:\", \"\\\"[...]Thompson sampling has several advantages:\", \"Utilizing prior information: By design, Thompson sampling algorithms utilize and benefit from the prior information about the arms.\", \"Easy to implement: While the regret of a UCB algorithm depends critically on the specific choice of upper-confidence bound, Thompson sampling depends only on the best possible choice. 
This becomes an advantage when there are complicated dependencies among actions, as designing and computing with appropriate upper confidence bounds present significant challenges Russo et al. (2018). In practice, Thompson sampling is usually easier to implement Chapelle \\\\& Li (2011).\", \"Stochastic exploration: Thompson sampling is a random exploration strategy, which could be more resilient under some bandit settings Lancewicki et al. (2021).\", \"Despite the success, Thompson sampling faces the problem of [...]\\\"\", \"Another weakness concerns the experiments performed by the authors. First, one can deplore that the adversarial attacks are not explained. Second, for the stochastic bandit experiment, one can regret that the proposed algorithms are not compared against any other robust algorithm from the literature. Third, the plotted results raise serious questions regarding the quality of the experiments:\", \"on Figure 1, the left and right plots seem to present the same data although their respective titles claim otherwise.\", \"on Figure 1 and Figure 2, the expected regret of TS seems not just close but almost exactly equal to the expected regret of UCB.\", \"on the top part of Figure 2, the titles of the $x$-axes are almost not readable.\", \"on the bottom part of Figure 2, it seems impossible to explain how the expected regret of the proposed algorithm can decrease when the adversarial budget increases. This strongly suggests that there is a mistake in the experiments.\", \"another incoherence in the displayed results that indicates an error in the experiments is the fact that on the top part of Figure 2, the robust version of Thompson Sampling seems to outperform the original Thompson Sampling and UCB algorithm \\\\emph{even with no adversarial attack} (when the adversarial budget is $0$). 
This simple \\\\emph{sanity check} strongly suggests some mistakes in the code.\", \"there seem to be confidence intervals associated with each point on the plots but no information is given about the level of confidence they represent.\", \"the code for running the experiments is provided without a readme.txt file, or a single comment explaining how to run the code or what is the purpose of the different files and functions. It was therefore not possible to verify the soundness of the experiments.\"], \"questions\": [\"Here is a list of suggestions for the authors.\", \"The first suggestion would be to entirely rewrite the proofs of Theorem 4.1 and Theorem 5.1 in the Appendix. Make sure to properly introduce all the notation with clear definitions, to justify all the steps in your proofs (equalities, inequalities, the use of some lemma, \\u2026), and to provide references to all the lemmata you borrow from previous work. To the best of your capacity, try to write the proofs in a pedagogical way such that a reader familiar with the work can easily follow and assess the steps in the reasoning.\", \"The second suggestion is for the authors to verify and clarify the omission in the last step of Theorem 4.1\\u2019s proof regarding the summation over all arms.\", \"The third suggestion is for the authors to provide more details supporting their claim that the algorithm and regret bounds can be extended to general priors. If this is not possible, the authors would have to change the title, abstract, and introduction to acknowledge the limited scope of the work.\", \"A fourth suggestion concerns the experimental results. For the Gaussian stochastic bandit setting, the authors should compare the performance of the first suggested algorithm against a competitive, robust algorithm. Also, the authors should provide a detailed description of the adversarial attacks used in the experiments. 
Also, the authors should address the following issues in Figures 1 and 2:\", \"are the data in the left and right plots of Figure 1 indeed distinct?\", \"why is the expected regret of TS nearly identical to that of UCB?\", \"clarification on the expected regret\\u2019s decrease as the adversarial budget increases on Figure 3.\", \"the authors should explain the level of confidence for the plotted intervals.\", \"finally, the authors should provide commented code with a description of the different files and functions and a readme.txt file so that a reviewer can reproduce the experiments quickly.\", \"A fifth suggestion would be for the authors to avoid using generative tools to write articles. It is frustrating for a reviewer to question whether they have spent more time writing a review than the author to write their paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer CZzm,\\n\\nThanks again for your constructive reviews! We have revised the paper according to your questions and suggestions. Please let us know if you have any follow-up comments or concerns about our responses and revision. We are happy to answer any further questions.\"}", "{\"comment\": \"Dear Reviewer skgH,\\n\\nThanks again for your constructive reviews! We have revised the paper according to your questions and suggestions. Please let us know if you have any follow-up comments or concerns about our responses and revision. We are happy to answer any further questions.\"}", "{\"summary\": \"This paper considers poisoning attacks in the multi-armed bandit setting. More specifically, the authors consider Thompson sampling algorithms and aim to design corruption-robust versions of these algorithms. They propose two algorithms, for stochastic and contextual linear settings, respectively, robust to poisoning attacks against an attacker that has a limited corruption budget. 
The authors argue that their results are near-optimal, and they provide simulation results comparing their approaches to validate their theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is interesting and enjoyable to read. It clearly explains the most important ideas and positions its contributions well with respect to prior work on corruption-robust bandits.\", \"To my knowledge, prior work has not studied Thompson sampling algorithms that are robust to reward poisoning attacks. The proof techniques in the paper appear to be standard, but I found the analysis non-trivial. While I didn't check all the proofs in great detail, the arguments provided in the proof sketches appear sound.\", \"The proposed algorithms are simple extensions of existing methods and are based on adjusting the posteriors to make them less susceptible to manipulation. The upper bounds on the regret of the proposed algorithms are near-optimal, as argued in the paper, with the arguments relying on the lower bounds from prior work.\", \"The paper supports its theoretical results with experiments.\"], \"weaknesses\": [\"While I found the results interesting, they are also somewhat expected, given the findings from prior work on poisoning attacks and corruption robustness in bandits.\", \"The experiments related to the stochastic MAB setting do not include a corruption-robust baseline. For the contextual bandit setting, the performance of the proposed method is similar to the corruption-robust baseline but is often worse. It would also be useful to include a richer set of attack strategies in the experiments.\", \"While I believe that the paper is overall well-written, the presentation could be improved and polished. There are quite a few typos in the text, and some of the figures in the experimental section do not adequately label the axes on the plots (see Figs. 2 and 4). Moreover, the fonts in the figures are too small. 
It also seems that some references are cited incorrectly. For example, the paper frequently cites He et al. (2023), which appears to be about *Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes*. It would be helpful if the authors clarified this.\"], \"questions\": \"I did not fully follow the intuition behind Algorithm 2 and the corresponding proof sketch. More specifically, it would be helpful if the authors elaborated further on their two remarks:\\n- *...the attacker can apply less influence on the estimator by corrupting such data...*, \\n- *...variance of the posterior distribution is limited due to the weighted estimator under reward poisoning...* \\n\\nPlease see *Strengths* and *Weaknesses* for my other comments and questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work studies the setting where the revealed rewards are corrupted. So, the learning agent cannot use the true reward to do posterior sampling. They consider two bandit variants with corruption: stochastic bandit and linear contextual bandit. They also propose efficient algorithms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"a new problem\", \"weaknesses\": \"problem setup: I still think the proposed problem is a special case (actually a simpler case) of differentially private online learning, based on lines 149 to 151.\", \"proposed_algorithms\": \"they are not that interesting or novel, simply re-shaping the posterior distribution in an optimistic way. This idea has been used in Hu and Hegde, 2022.\", \"questions\": \"1. what does \\\"optimality in the face of corruption\\\" mean in line 65?\\n\\n2. In line 795, the constant is 72 e^{64}. 
You can improve the constant by using some analysis in https://arxiv.org/abs/2407.14879.\\n\\nAlso, I think the aforementioned paper is quite related to this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper examines modified Thompson Sampling algorithms for Gaussian bandits and contextual linear bandits with Gaussian priors, focusing on mitigating adversarial attacks that perturb rewards. The first modification introduces an optimistically biased distribution for Gaussian stochastic bandits, while the second adjusts posterior updates for contextual linear bandits based on the current context. Both methods aim to reduce expected regret under adversarial conditions. Experiments show the effectiveness of these modifications compared to UCB and Thompson Sampling.\\n\\nWhile the effort put in by the authors to improve the paper is commendable, as pointed out by the reviewers, it is hard to judge the correctness of the proofs and the efficacy of the results with this many modifications. Moreover, there are remaining concerns about the correctness of the proofs and the nature of the empirical results (e.g., Fig 3), which are non-intuitive. Based on these, I suggest that the authors submit to the next suitable venue after incorporating all the reviewer feedback.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"comment\": \"Thanks for your constructive comments and suggestions. Based on the reviews, the revision has been uploaded, and the most important changes are highlighted in blue. Below is our response to your concerns.\", \"q1\": \"I still think the proposed problem is a special case (actually a simpler case) of differentially private online learning, based on lines 149 to 151; this idea has been used in [1]\", \"a1\": \"The differentially private setting can be thought of as a setting of robustness against poisoning attacks. 
The attacker can modify a certain number of data points, and a differentially private algorithm needs to ensure that its behavior is similar under any possible attacks. However, there are also two main differences between the two settings.\\n\\n1. Robustness goal: in the differentially private setting, the goal of the agent is to ensure that the decisions it makes under corruption are similar to the decisions without corruption. As defined in [1], let $M$ be the algorithm and $X, X\\u2019$ be two neighboring reward sequences to represent the original and corrupted rewards. Then the agent needs to ensure that $P\\\\{M(X) \\\\in D\\\\} \\\\leq e^\\\\epsilon P\\\\{M(X\\u2019) \\\\in D\\\\}$ holds for any decision set $D$ and a small value of $\\\\epsilon$; in the reward poisoning attack setting considered in our work, the goal of the learning agent is to minimize the total regret / maximize the total reward $\\\\sum_{t} r(t)$. \\n\\n2. Attack constraint: Let $c(t)$ be the corruption at time $t$. In the differentially private setting, the attacker is constrained by the total number of data being corrupted $\\\\sum \\\\mathbb{1}[c(t) \\\\neq 0] \\\\leq C$; in the reward poisoning attack setting considered in our work, the attacker is constrained by the total amount of corruption $\\\\sum_{t} |c(t)| \\\\leq C$. Note that there is no constraint on the number of data being corrupted in the reward poisoning setting, and it can be as large as the number of all data: $\\\\sum \\\\mathbb{1}[c(t) \\\\neq 0] = T$. \\n\\nTherefore, the poisoning attack setting is not a special case of the differentially private setting. \\n\\nAs a result, a differentially private algorithm like the one in [1] is not necessarily a robust algorithm against reward poisoning attacks. A differentially private algorithm can only guarantee that when some of the data are corrupted, the regret of the algorithm remains low. 
In the reward poisoning attack setting, even under a limited corruption budget $\\\\sum_{t} |c(t)| \\\\leq C$, every single data point can be corrupted: $\\\\sum \\\\mathbb{1}[c(t) \\\\neq 0]=T$. For an $\\\\epsilon$-differentially private algorithm proposed in [1], for two reward sequences $X$ and $X\\u2019$ that differ everywhere, it can only guarantee that $P\\\\{M(X) \\\\in D\\\\} \\\\leq e^{\\\\epsilon \\\\cdot T} P\\\\{M(X\\u2019) \\\\in D\\\\}$. Setting $\\\\epsilon=o(1/T)$ can make the guarantee non-trivial, but it also results in $\\\\Omega(T)$ regret for the algorithm. It has also been theoretically proved in a different learning setting that the differentially private algorithms are vulnerable to data poisoning attacks because every single data point can be modified under the attack [2].\", \"q2\": \"what does \\\"optimality in the face of corruption\\\" mean in line 65?\", \"a2\": \"\\\"optimality in the face of corruption\\\" is a general idea similar to the popular exploration strategy \\u201coptimality in the face of uncertainty.\\u201d In \\u201coptimality in the face of uncertainty,\\u201d the agent optimistically estimates the best possible reward for an arm considering the uncertainty in its evaluation. Similarly, in \\\"optimality in the face of corruption\\\", the agent optimistically estimates the best possible reward for an arm considering the uncertainty and the influence of data corruption on its estimation.\", \"q3\": \"In line 795, the constant is 72 e^{64}. You can improve the constant by using some analysis in https://arxiv.org/abs/2407.14879.\", \"a3\": \"Thanks for the suggestion. We find that it is not straightforward to directly apply the techniques in this work to improve the big constant in our bound, but we will follow the idea in that work and improve our theoretical results in future revisions.\\n\\n[1]: Hu, Bingshan, and Nidhi Hegde. 
\\\"Near-optimal thompson sampling-based algorithms for differentially private stochastic bandits.\\\" Uncertainty in Artificial Intelligence. PMLR, 2022\\n\\n[2]: Ma, Yuzhe, Xiaojin Zhu, and Justin Hsu. \\\"Data poisoning against differentially-private learners: Attacks and defenses.\\\" arXiv preprint arXiv:1903.09860 (2019).\"}" ] }
EpgoFFUM2q
Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation
[ "Wenxuan Bao", "Zhichen Zeng", "Zhining Liu", "Hanghang Tong", "Jingrui He" ]
Powerful as they are, graph neural networks (GNNs) are known to be vulnerable to distribution shifts. Recently, test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain, without re-accessing the source domain. However, existing TTA algorithms are primarily designed for attribute shifts in vision tasks, where samples are independent. These methods perform poorly on graph data that experience structure shifts, where node connectivity differs between source and target graphs. We attribute this performance gap to the distinct impact of node attribute shifts versus graph structure shifts: the latter significantly degrades the quality of node representations and blurs the boundaries between different node categories. To address structure shifts in graphs, we propose Matcha, an innovative framework designed for effective and efficient adaptation to structure shifts by adjusting the hop-aggregation parameters in GNNs. To enhance the representation quality, we design a prediction-informed clustering loss to encourage the formation of distinct clusters for different node categories. Additionally, Matcha seamlessly integrates with existing TTA algorithms, allowing it to handle attribute shifts effectively while improving overall performance under combined structure and attribute shifts. We validate the effectiveness of Matcha on both synthetic and real-world datasets, demonstrating its robustness across various combinations of structure and attribute shifts. Our code is available at https://github.com/baowenxuan/Matcha.
[ "test-time adaptation", "distribution shifts", "structure shifts", "graph neural networks" ]
Accept (Poster)
https://openreview.net/pdf?id=EpgoFFUM2q
https://openreview.net/forum?id=EpgoFFUM2q
ICLR.cc/2025/Conference
2025
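The abstract above mentions a prediction-informed clustering loss that encourages distinct clusters for different node categories. The paper's actual loss is not reproduced in this record; the sketch below is an assumed, generic instantiation (soft class centroids from predicted probabilities, intra-class scatter divided by inter-class scatter) meant only to illustrate the idea:

```python
import numpy as np

# Generic sketch of a prediction-informed clustering objective of the kind
# named in the abstract above. This is an assumed instantiation, not the
# paper's PIC loss.
def pic_style_loss(Z, P, eps=1e-8):
    # Z: (n, d) node representations; P: (n, c) predicted class probabilities.
    weights = P / (P.sum(axis=0, keepdims=True) + eps)
    centroids = weights.T @ Z                        # (c, d) soft centroids
    diffs = Z[:, None, :] - centroids[None, :, :]    # (n, c, d)
    intra = (P[:, :, None] * diffs ** 2).sum() / len(Z)
    inter = ((centroids - centroids.mean(axis=0)) ** 2).sum()
    return intra / (inter + eps)

rng = np.random.default_rng(0)
# Well-separated clusters vs. representations blurred by a structure shift.
Z_sep = np.vstack([rng.normal(2.0, 0.1, (50, 2)), rng.normal(-2.0, 0.1, (50, 2))])
Z_blur = rng.normal(0.0, 1.0, (100, 2))
P = np.vstack([np.tile([0.9, 0.1], (50, 1)), np.tile([0.1, 0.9], (50, 1))])

loss_sep = float(pic_style_loss(Z_sep, P))
loss_blur = float(pic_style_loss(Z_blur, P))
```

Minimizing such an objective pushes representations toward compact, well-separated clusters: `loss_sep` comes out far smaller than `loss_blur`.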
{ "note_id": [ "unNl9jROp4", "umMHVd4z2Z", "rClcuTeXkx", "mQ3d4pDNVP", "l0uNfmRstm", "hKxYQHiRs7", "gcRpUTlPp5", "f4ujaucrlY", "b6rLLdJ78e", "avf0cP7p0n", "aQOX6FVWaU", "ZXT3JwaApv", "YtPoc1nawf", "YEjd4WaWnD", "WJTXodrYEu", "W5JsGxbZJa", "UNsn0wWD5p", "ONUPjDFfLA", "KI9Hg1mgTQ", "J3I11TAiuW", "8iqy6ZuOJm", "1aOTie6P7S" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "comment", "official_comment", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732227858551, 1732225059286, 1733078557440, 1732228721917, 1733264574233, 1732228438694, 1730512957570, 1730667471089, 1732226546468, 1732227717947, 1730658641245, 1730176339852, 1730648892614, 1732226396811, 1732227378660, 1737523676624, 1732226263761, 1739384044375, 1732747831842, 1734436563286, 1732227501370, 1732225350590 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Reviewer_KHid" ], [ "ICLR.cc/2025/Conference/Submission5006/Reviewer_jAsf" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Reviewer_eWx1" ], [ "ICLR.cc/2025/Conference/Submission5006/Reviewer_M4v2" ], [ "ICLR.cc/2025/Conference/Submission5006/Reviewer_qcfU" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "~Wenxuan_Bao1" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Area_Chair_4mqY" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ], [ "ICLR.cc/2025/Conference/Submission5006/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer KHid (Part 2)\", \"comment\": \"# W2. Analysis of Multi-Layer GCNs\\n\\n> In Section 3.2, the authors discuss the distinct impacts of attribute and structure shifts; however, using a single-layer GCN as an example may not be sufficient. Given that adjusting hop-aggregation parameters typically involves integrating multi-layer GCNs and broader neighborhood information, it would be beneficial to explore the impact of multi-layer GCNs in the model. \\n\\nThank you for this great suggestion. We agree that exploring the impact of multi-layer GCNs in the model would be beneficial, as their performance can be influenced by more complex structure shifts beyond degree shifts and homophily shifts (e.g., clustering coefficients). As suggested, we have extended our analysis to $K$-hop GCNs, which aggregate information from nodes' own features as well as their 1-hop to $K$-hop neighbors. Detailed theoretical analysis is provided in Appendix A.7, and also in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_multihop_gcn.pdf) for your convenience. All our propositions naturally generalize to $K$-hop GCNs, effectively capturing these more intricate structure shifts.\\n\\n# W3. 
Mechanisms of How Structure Shifts Affect GNN Performance\\n\\n> Given that the process of using the PIC loss to update hop-aggregation parameters intuitively focuses on optimizing the distinction of node representations (i.e., separating node attribute boundaries), it is recommended to further explore the specific mechanisms by which various structural shifts affect GNN performance. This would help to enhance the theoretical depth and rigor of the analysis. \\n\\nThank you for your insightful suggestion. We agree that exploring the mechanisms by which structure shifts affect GNN performance is beneficial. In our work, Proposition 3.4 provides detailed analysis of how structure shifts influence performance. Specifically, the effects of homophily and degree shifts can be isolated by setting $\\\\Delta h$ or $\\\\Delta d$ to 0, allowing for a focused examination of each factor. Furthermore, Proposition 3.5 illustrates how the optimal hop-aggregation parameter $\\\\gamma$ depends on homophily and degree, offering additional theoretical insights into their role in adapting to structural shifts.\"}", "{\"title\": \"Response to All Reviewers (Part 1)\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely thank you for your valuable feedback and for recognizing the strengths of our work. Below, we summarize the main contributions and key strengths of our paper: \\n- **Addressing an Important Yet Overlooked Problem** (jAsf). Our work focuses on the critical yet previously overlooked problem of structure shifts in graph data, in the context of TTA. \\n- **Detailed Theoretical Analysis** (jAsf, eWx1, qcfU). We provide rigorous theoretical analysis of the impact of structure shifts on single-layer GCNs using CSBM graphs, offering deep insights into the challenges posed by these shifts. \\n- **Simple, Effective, and Compatible Method** (all reviewers). Our proposed AdaRC framework is simple yet effective in improving node representation quality and addressing structure shifts. 
It is fully compatible with existing TTA algorithms, enabling broader applicability and enhanced performance.\\n- **Comprehensive Experiments** (qcfU, M4v2). Extensive experiments on both synthetic and real-world datasets validate the robustness and effectiveness of AdaRC across various types of shifts, particularly structure shifts.\\n\\nWe deeply appreciate your constructive suggestions and would like to address some common questions raised about our work \\n\\n# 1. Gap between CSBM and Real-World Graphs (jAsf, KHid)\\n\\nOur theoretical analysis primarily focuses on single-layer GCNs and CSBM graphs, which effectively capture homophily and degree shifts, two factors commonly observed in real-world scenarios that significantly influence GNN performance. Furthermore, reviewers' questions have inspired us to further extend our theoretical framework to multi-hop GCNs (in Appendix A.7, and also in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_multihop_gcn.pdf)), allowing us to analyze more complex structure shifts beyond single-layer settings. \\n- **CSBM Reflects Real-World Structure Shifts**. CSBM graphs have been frequently utilized in theoretical studies [1,2,3] of GNNs due to their ability to model two critical graph metrics, degree and homophily, that significantly influence GNN performance. We believe CSBM effectively captures the challenges posed by structure shifts commonly encountered in real-world applications.\\n- **Extending Analysis to Multi-Hop GCNs**. We agree that real-world graphs exhibit structure shifts beyond homophily and degree, such as clustering coefficients, which can also impact the performance of multi-layer GNNs. To address this, we have extended our analysis to $K$-hop GCNs, which aggregate information from nodes' own features as well as their 1-hop to $K$-hop neighbors. 
Detailed theoretical analysis is provided in Appendix A.7 and also in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_multihop_gcn.pdf) for your convenience. All our propositions naturally generalize to $K$-hop GCNs, capturing more complex structure shifts. \\n\\n[1] Yao Ma, Xiaorui Liu, Neil Shah, Jiliang Tang: Is Homophily a Necessity for Graph Neural Networks? ICLR 2022\\n\\n[2] Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang: Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? NeurIPS 2023\\n\\n[3] Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, Danai Koutra: Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks. ICDM 2022\\n\\n# 2. Performance on Real-World Graphs (jAsf, eWx1)\\nThe relatively lower performance on Twitch-E and OGB-Arxiv datasets is due to the higher uncertainty and more complex distribution shifts present in these datasets, which make the classification task harder. This observation is consistent with results in related works [4]. We also experimented with using cross-entropy loss instead of PIC loss. Even when the model was aware of 1\\\\% of the testing labels, the accuracy achieved was only 58.21\\\\% on Twitch and 42.12\\\\% on Arxiv. In comparison, our PIC loss achieved 56.76\\\\% and 41.74\\\\% accuracy, respectively, without requiring any testing labels. \\n\\n[4] Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah: Empowering Graph Representation Learning with Test-Time Graph Transformation. 
ICLR 2023\"}", "{\"title\": \"Friendly Reminder Regarding Rebuttal Discussion\", \"comment\": \"Dear Reviewers,\\n\\nThank you for taking the time to provide thoughtful feedback on our submission, \\\"AdaRC: Mitigating Graph Structure Shifts during Test-Time.\\\" We\\u2019ve greatly benefited from your comments and hope our rebuttal addressed your questions and concerns effectively.\\n\\nAs December 2 is the final day to post feedback to the authors, we wanted to kindly remind you that we are happy to engage in further discussions or clarify any remaining points if needed.\\n\\nWe appreciate your effort in reviewing our work and look forward to hearing your thoughts.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer M4v2 (Part 2)\", \"comment\": \"# Q1. Real-Time Shift\\n\\n> If the graph structure is shifting in real-time, that is, there is no stable distribution in testing time, will the proposed algorithm still work well? \\n\\nThank you for raising this question. While our experiments in the paper mainly focused on adapting to a single graph with a stable distribution, we also evaluated AdaRC\\u2019s performance on a stream of graphs with evolving structure shifts. Specifically, we used the Syn-Cora dataset, where the model was pre-trained on a source graph with a homophily of 0.8. It was then sequentially adapted to five target graphs with homophilies of 0.1, 0.7, 0.3, 0.9, and 0.2, simulating continuously changing homophily. The experimental setup, except for the sequence of target graphs, was identical to that in the Syn-Cora experiments described in the main text.\\n\\nFor example, in the column corresponding to the target graph with a homophily of 0.3, we compared two scenarios: (1) static: directly adapting from 0.8 \\u2192 0.3, and (2) evolving: sequentially adapting through 0.8 \\u2192 0.1 \\u2192 0.7 \\u2192 0.3. \\n\\nTable A. 
Accuracy of AdaRC on static target graph and evolving target graph (mean \\u00b1 s.d.)\\n| Method | Setting | Target graph (h = 0.1) | Target graph (h = 0.7) | Target graph (h = 0.3) | Target graph (h = 0.9) | Target graph (h = 0.2) |\\n|--------|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|\\n| ERM | Static | 63.19 \\u00b1 1.28 | 88.88 \\u00b1 0.61 | 71.46 \\u00b1 0.62 | 97.15 \\u00b1 0.32 | 65.67 \\u00b1 0.35 |\\n| AdaRC | Static | 79.75 \\u00b1 1.04 | 90.57 \\u00b1 0.47 | 79.68 \\u00b1 0.73 | 97.40 \\u00b1 0.28 | 78.96 \\u00b1 1.08 |\\n| AdaRC | Evolving | 79.75 \\u00b1 1.04 | 90.65 \\u00b1 0.33 | 77.43 \\u00b1 0.62 | 97.31 \\u00b1 0.42 | 78.26 \\u00b1 1.02 |\\n\\nThe results, shown in Table A above, indicate that **AdaRC achieves performance on evolving graphs highly comparable to that on static graphs**. This demonstrates AdaRC\\u2019s ability to handle dynamic scenarios effectively, even under continuously changing graph structures.\"}", "{\"title\": \"Brief Summary of Contributions and Modifications\", \"comment\": \"We sincerely thank all reviewers for their constructive feedback and suggestions, which have greatly improved the quality of our paper. As the discussion period concludes, we would like to provide a brief summary of our paper\\u2019s contributions and the modifications made during the rebuttal:\\n\\n# Contributions\\n1. **Addressing an Important Yet Overlooked Problem**: Our work focuses on the critical yet underexplored issue of structure shifts in graph data within the context of TTA.\\n2. **Detailed Theoretical Analysis**: We present rigorous theoretical analysis, uncovering the distinct impact patterns of attribute and structure shifts on GNNs.\\n3. **Simple, Effective, and Compatible Method**: Our proposed AdaRC framework is simple yet effective in improving node representation quality under structure shifts. 
It is fully compatible with existing TTA algorithms, enabling broader applicability and enhanced performance.\\n4. **Comprehensive Experiments**: Extensive experiments on synthetic and real-world datasets validate the robustness and effectiveness of AdaRC across various types of shifts, particularly structure shifts.\\n\\n# Rebuttal Modifications\\n1. **Intuitive Examples**: To better illustrate our theoretical findings, we added visualizations in Appendix A.2.\\n2. **Extension of Theory**: We extended our theoretical analysis to multi-hop GCNs, to analyze the impact of structure shifts beyond homophily and degree shifts, as detailed in Appendix A.7.\\n3. **Additional Experiments**:\\n 1. **Scalability**: We evaluated AdaRC\\u2019s computational cost with increasing graph sizes (Appendix C.8).\\n 2. **More Architectures**: We provided additional full experiments combining AdaRC with various GNN architectures (Appendix C.9).\\n 3. **Evolving Graphs**: We tested AdaRC on streams of graphs with evolving structure shifts, with results presented in Table A of [this response](https://openreview.net/forum?id=EpgoFFUM2q&noteId=1aOTie6P7S).\\n\\nWe would also like to respectfully clarify that Reviewer M4v2's evaluation may partially stem from a misunderstanding of our theoretical analysis, as their stated confidence level is only 1. Specifically, they suggested that our algorithm relies on theoretical assumptions. However, AdaRC does not depend on these assumptions; rather, we use CSBM as a common analytical framework to study the challenges of homophily shift and degree shift. 
To demonstrate the generalizability of AdaRC, we have conducted extensive experiments on real-world datasets, as shown in Table 2.\\n\\nThat said, we are grateful for Reviewer M4v2's suggestion regarding high-level intuitions, which prompted us to include additional visualizations in Appendix A.2 to enhance accessibility and understanding.\\n\\nOnce again, we deeply appreciate the time and effort each reviewer has invested in providing valuable feedback.\"}", "{\"title\": \"Response to Reviewer M4v2 (Part 1)\", \"comment\": \"Dear Reviewer M4v2,\\n\\nThank you for recognizing the novelty of the PIC loss, the theoretical guarantees of AdaRC, and the effectiveness of the algorithm demonstrated through experiments on both synthetic and real-world datasets. Your acknowledgment of these contributions is greatly appreciated. Below, we address your questions and concerns: \\n\\n# W1. High-level Intuitions of Math\\n\\n> The paper is not easy to follow, especially for readers without years of experiences in graph neural networks. It would be nice if the authors could provide high level intuitions behind their algorithms, before giving mathematical statements.\\n\\nThank you for your suggestion. We agree that providing high-level intuitions alongside mathematical statements can make the paper more accessible. Below, we outline the intuitive explanations behind the key components of our theoretical analysis:\\n- **Attribute shifts and structure shifts have different impact patterns (Proposition 3.3 and 3.4)**. We visually illustrated this distinction in Figure 2. Attribute shifts cause a translational change in the overall distribution, where nodes from different classes remain separable. In contrast, structure shifts mix the distributions of node representations from different classes, leading to reduced discriminative power. \\n- **Adjusting the hop-aggregation parameter can restore the quality of degraded node representations (Proposition 3.5)**. 
We provided two examples (lines 267 - 286) to demonstrate how adjusting the hop-aggregation parameter $\\\\gamma$ can mitigate the degradation in node representation quality caused by structure shifts. To further enhance clarity, we included visualizations for these examples in Appendix A.2, and also [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_adapt_gamma.pdf) for your convenience. In the visualizations, the first row represents homophily shifts, while the second row represents degree shifts. The first column shows the source graph, the second column shows the target graph before adjusting $\\\\gamma$, and the third column shows the target graph after adjusting $\\\\gamma$. These visualizations show that while homophily shifts and degree shifts degrade representation quality, tuning $\\\\gamma$ effectively alleviates this degradation, improving the separability of node representations.\\n\\n# W2. Assumptions\\n\\n> To my best guess, the proposed method is not data driven. The algorithm relies on several theoretical assumptions in order to extract an invariant node embedding on the shifted graph structure. The major concern is that, such assumptions may not hold in real-world tasks. It is also hard to verify whether the assumptions hold or not for a given task. This greatly limits the practical value of the algorithm.\\n\\nThank you for your thoughtful feedback. We would like to address your concern from three perspectives:\\n- **CSBM Framework as a Common Analytical Framework**. The CSBM framework is widely used [1,2,3] and captures two key types of shifts we consider: homophily shift and degree shift. It provides an interpretable way to analyze these important challenges. \\n- **AdaRC does not depend on CSBM Assumptions**. Our algorithm does not rely on CSBM. 
We use it solely for theoretical analysis to explain why attribute and structure shifts impact GNNs differently and why adjusting $\\\\gamma$ can restore representation qualities. This does not restrict AdaRC to CSBM scenarios. In practice, AdaRC only requires the graph to exhibit structure shifts, which are common and can often be identified through task knowledge or a validation set.\\n- **Verification on Real-World Datasets**. We have also tested our algorithm on various real-world datasets in Table 2 and observed that AdaRC consistently demonstrates its effectiveness, supporting the general applicability of our approach. \\n\\n[1] Yao Ma, Xiaorui Liu, Neil Shah, Jiliang Tang: Is Homophily a Necessity for Graph Neural Networks? ICLR 2022\\n\\n[2] Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang: Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? NeurIPS 2023\\n\\n[3] Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, Danai Koutra: Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks. ICDM 2022\"}", "{\"summary\": \"This paper proposes a method to mitigate the impact of structure shifts on graph neural networks (GNNs) during test-time adaptation (TTA). The core innovation lies in the introduction of the prediction-informed clustering (PIC) loss to enhance node representation quality during TTA, alongside adjusting hop-aggregation parameters to handle structure shifts. The approach is designed to be compatible with existing TTA algorithms and shows promising results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of the PIC loss to improve node representation quality without labels is a promising approach. 
This loss function helps better separate classes by minimizing intra-class variance and maximizing inter-class variance, providing a reliable alternative to traditional entropy-based methods.\\n\\n2. AdaRC is a plug&play framework that can be seamlessly integrated into existing TTA algorithms, making the approach highly extensible.\", \"weaknesses\": \"1. The authors mention that traditional TTA methods rely on the quality of node representations, but there are also existing works in graph TTA and DA that similarly address the impact of structure shifts. It is recommended to further discuss the differences and advantages of AdaRC in comparison to these methods, particularly in handling heterophilic graphs or complex structure shifts.\\n\\n2. In Section 3.2, the authors discuss the distinct impacts of attribute and structure shifts; however, using a single-layer GCN as an example may not be sufficient. Given that adjusting hop-aggregation parameters typically involves integrating multi-layer GCNs and broader neighborhood information, it would be beneficial to explore the impact of multi-layer GCNs in the model.\\n\\n3. Given that the process of using the PIC loss to update hop-aggregation parameters intuitively focuses on optimizing the distinction of node representations (i.e., separating node attribute boundaries), it is recommended to further explore the specific mechanisms by which various structural shifts affect GNN performance. This would help to enhance the theoretical depth and rigor of the analysis.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the issue of structure shifts in Graph Neural Networks at test time. The authors claim that existing test-time adaptation methods are primarily designed for attribute shifts rather than structure shifts. 
Based on theoretical analysis, they propose a framework called AdaRC, which adapts hop-aggregation parameters to restore good node representations, alleviating the negative impact of structure shifts. They also employ pseudo-class labels to optimize the hop-aggregation parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper points out the problem of structure shifts, which is indeed overlooked in previous research on TTA, or at least not focused on explicitly.\\n\\n2. The proposed AdaRC framework provides a simple yet effective solution for graphs with structure shifts at test time. It is compatible with existing Test-Time Adaptation algorithms, which allows better adaptation.\\n\\n3. The authors conduct a detailed theoretical analysis of the impact of structure shifts on a single-layer GCN on a CSBM-generated graph.\", \"weaknesses\": \"1. The theoretical analysis focuses on single-layer GCNs and CSBM-generated graphs, which represent a simplified model where structure shifts are easily identified. However, the evolving real-world graph structures are far more complex. It is unclear whether the findings on this simplified model still hold for real-world graphs, particularly in domains where the task accuracy is not high.\\n\\n2. The experimental results, which show promising performance, are primarily based on synthetic datasets or real-world datasets with synthetic structure shifts. Verification that structure shifts exist in the real-world graphs should be included. \\n\\n3. The experimental setup for real-world datasets lacks transparency. The paper mentions injecting structure shifts by deleting a subset of homophilic edges. However, it did not specify the quantity of this deletion and the selection criteria. \\n\\n4. The paper presents full results for only one GNN backbone. 
Presenting results for different GNN backbones, especially fundamental ones, would provide a more comprehensive understanding of AdaRC's effectiveness and applicability across various architectures.\", \"questions\": \"1. AdaRC is a plug-and-play method, have you explored scenarios where it might conflict with other TTA methods? For instance, if a TTA method focuses on adapting to changes in node features while AdaRC adjusts to structure shifts, could their optimization goals lead them in opposite directions?\\n\\n2. On Twitch-E and OGB-Arxiv, AdaRC shows only minor improvements without artificially injected structure shifts. Does this suggest that the impact of structure shift might be less significant in these real-world datasets? Could this be because the type of structure shift addressed by AdaRC is less prevalent in these contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer eWx1\", \"comment\": \"Dear Reviewer eWx1,\\n\\nWe sincerely thank you for your thoughtful review and for highlighting the strengths of our work, including the effectiveness of the proposed PIC loss and its seamless integration with existing frameworks, as well as the comprehensive analysis of attribute and structure shifts using CSBM. Below, we address your concerns and questions:\\n\\n# W1. Performance on Real-World Graphs\\n\\n> The performance improvements on the Twitch-E and OGB-Arxiv datasets appear marginal. Could the authors provide further analysis to clarify the reasons behind these limited gains?\\n\\nThe relatively lower performance on Twitch-E and OGB-Arxiv datasets is due to the higher uncertainty and more complex distribution shifts present in these datasets, which make the classification task harder. This observation is consistent with results in related works [1]. We also experimented with using cross-entropy loss instead of PIC loss. 
Even when the model was aware of 1% of the testing labels, the accuracy achieved was only 58.21% on Twitch and 42.12% on Arxiv. In comparison, our PIC loss achieved 56.76% and 41.74% accuracy, respectively, without requiring any testing labels.\\n\\n[1] Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah: Empowering Graph Representation Learning with Test-Time Graph Transformation. ICLR 2023\\n\\n# W2. Gains Achieved Vary\\n\\n> The gains achieved seem to vary significantly across different methods and backbone models. For instance, the Tent method demonstrates the highest improvements. Could the authors offer insights into why this variation occurs?\\n\\nThank you for your observation. The variation in gains across different TTA methods primarily arises from the distinct designs of these algorithms, which make them suitable for different types of attribute shifts. For example, Tent adjusts the parameters of the batch normalization layer, making it particularly effective for handling attribute shifts that resemble large translational changes. On the other hand, T3A updates class prototypes using testing data, which may be more effective for complex shift patterns with smaller magnitudes. We believe that selecting the most appropriate TTA algorithm based on the specific nature of the attribute shift is an interesting direction for future research.\"}", "{\"title\": \"Response to Reviewer KHid (Part 1)\", \"comment\": \"Dear Reviewer KHiD,\\n\\nWe sincerely thank you for your thoughtful review and for recognizing the strengths of our work, including the introduction of the PIC loss for improving node representation quality, and the extensibility of AdaRC as a plug-and-play framework. Below, we address your concerns and suggestions:\\n\\n# W1. 
Comparison with Existing Graph TTA and DA Methods\\n\\n> The authors mention that traditional TTA methods rely on the quality of node representations, but there are also existing works in graph TTA and DA that similarly address the impact of structure shifts. It is recommended to further discuss the differences and advantages of AdaRC in comparison to these methods, particularly in handling heterophilic graphs or complex structure shifts.\\n\\nThank you for raising this important point. We have provided a general discussion of graph domain adaptation (GDA) and graph test-time adaptation (GTTA) methods in the related work section. We are happy to elaborate further and clarify how AdaRC differs from and improves upon these methods: \\n\\n**Graph domain adaptation** (GDA) aims to transfer knowledge from a labeled source graph to an unlabeled target graph with access to both graphs. Most of the GDA algorithms focus on learning invariant representations over the source and target graphs by adversarial learning or minimizing the distance between source and target. For example, SRGNN [1] minimizes the central moment discrepancy of node representations, and DANE [2] aligns the distribution via adversarial learning. However, these methods do not explicitly differentiate between attribute shifts and structure shifts, treating them as a single issue to be solved through invariant representation learning. Recently, StruRW [3] addresses structure shifts explicitly by reweighting edge weights in the source graph to align with the target graph. While these approaches offer useful insights, they rely heavily on access to both source and target graphs simultaneously, making them inapplicable to TTA scenarios where only the target graph is available during adaptation. \\n\\n**Graph test-time adaptation** (GTTA) aims to adapt a pre-trained GNN to an unlabeled target graph, without re-accessing the source domain during adaptation. 
Recent works such as GTrans, SOGA, and GraphPatcher take varied approaches to this problem. GTrans [4] modifies the target graph\\u2019s features and adjacency matrix during testing by minimizing a contrastive loss. While flexible and data-centric, it suffers from high computational complexity and a large parameter space. SOGA [5] maximizes mutual information between inputs and outputs and enforces consistency between neighboring or structurally similar nodes. However, SOGA is designed for homophilic graphs and thus performs relatively poorly on heterophilic graphs. GraphPatcher [6] focuses primarily on degree shifts, generating virtual nodes to improve predictions on low-degree nodes. In comparison, AdaRC is designed to handle a broad range of structure shifts, including homophily and degree shifts. Additionally, AdaRC achieves this with better computational efficiency, making it a practical and effective solution for TTA scenarios. \\n\\n[1] Qi Zhu, Natalia Ponomareva, Jiawei Han, Bryan Perozzi: Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data. NeurIPS 2021\\n\\n[2] Yizhou Zhang, Guojie Song, Lun Du, Shuwen Yang, Yilun Jin: DANE: Domain Adaptive Network Embedding. IJCAI 2019\\n\\n[3] Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiang Qiu, Pan Li: Structural Re-weighting Improves Graph Domain Adaptation. ICML 2023\\n\\n[4] Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah: Empowering Graph Representation Learning with Test-Time Graph Transformation. ICLR 2023\\n\\n[5] Haitao Mao, Lun Du, Yujia Zheng, Qiang Fu, Zelin Li, Xu Chen, Shi Han, Dongmei Zhang: Source Free Graph Unsupervised Domain Adaptation. WSDM 2024\\n\\n[6] Mingxuan Ju, Tong Zhao, Wenhao Yu, Neil Shah, Yanfang Ye: GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation. 
NeurIPS 2023\"}", "{\"summary\": \"This work proposes the AdaRC framework, a test-time adaptation approach for graph neural networks (GNNs). At the core of AdaRC is a prediction-informed clustering (PIC) loss, designed to seamlessly integrate with existing methods. Experimental results demonstrate the effectiveness of PIC in enhancing GNN performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper analyzes the performance gap in a simple single-layer Graph Convolutional Network (GCN) using graphs generated from the Community Structure Benchmark Model (CSBM). It effectively investigates the impacts of attribute shifts and structural shifts on performance.\\n\\nThe proposed PIC loss is both simple and effective, allowing for seamless integration with existing frameworks and methods. Its effectiveness is validated through experimental results.\", \"weaknesses\": [\"It is worth noting that, as I am not deeply familiar with this specific area, my comments here may lack technical depth. I will rely on the insights from other expert reviewers and will focus primarily on their evaluations. Here, a few potential concerns include:\", \"The performance improvements on the Twitch-E and OGB-Arxiv datasets appear marginal. Could the authors provide further analysis to clarify the reasons behind these limited gains?\", \"The gains achieved seem to vary significantly across different methods and backbone models. For instance, the Tent method demonstrates the highest improvements. Could the authors offer insights into why this variation occurs?\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"[Update]\\nThanks for the authors' feedback. My concerns have been addressed. 
I raised my vote accordingly.\\n\\nThe authors present a method called AdaRC, which is designed to improve the accuracy of Graph Neural Networks (GNNs) under test-time graph structure shifts. It achieves this by adapting GNN architectures in real-time based on the changing graph structure during testing. The key idea is to introduce a prediction-informed clustering loss which leads to better learned node representations. The authors demonstrate the superiority of the AdaRC algorithm on several popular benchmark datasets with comparisons to popular baseline methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Propose a novel loss (PIC loss) to improve the robustness of learned graph embeddings when structure shift is present.\", \"The algorithm comes with theoretical guarantees, although I did not check the correctness of the proof.\", \"Experiments on both controlled synthetic datasets and real-world datasets show the effectiveness of the proposed algorithm.\"], \"weaknesses\": [\"The paper is not easy to follow, especially for readers without years of experience in graph neural networks. It would be nice if the authors could provide high-level intuitions behind their algorithms, before giving mathematical statements.\", \"To my best guess, the proposed method is not data driven. The algorithm relies on several theoretical assumptions in order to extract an invariant node embedding on the shifted graph structure. The major concern is that such assumptions may not hold in real-world tasks. It is also hard to verify whether the assumptions hold or not for a given task. 
This greatly limits the practical value of the algorithm.\"], \"questions\": [\"If the graph structure is shifting in real-time, that is, there is no stable distribution in testing time, will the proposed algorithm still work well?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes AdaRC, a novel framework aimed at improving graph neural networks (GNNs) under distribution shifts, particularly those involving changes in graph structure. Traditional test-time adaptation (TTA) methods are typically designed to handle shifts in node attributes, but they perform poorly with structure shifts, where connectivity patterns vary significantly between training and test data. AdaRC addresses this gap by dynamically adjusting the hop-aggregation parameters in GNNs, effectively enhancing node representation quality under structure shifts. Additionally, a prediction-informed clustering (PIC) loss is introduced to encourage clear separations between node categories, further improving adaptation. 
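The PIC loss summarized above — encouraging clear separation between node categories by keeping intra-class variance small relative to the overall spread — can be sketched in a few lines of numpy. This is an illustrative reconstruction from the descriptions in this thread, not the paper's exact formula; the function `pic_loss` and its soft-variance ratio are assumptions:

```python
import numpy as np

def pic_loss(z, p, eps=1e-12):
    """Prediction-informed clustering surrogate (illustrative only).

    z: (n, d) node representations; p: (n, k) soft pseudo-labels (rows sum
    to 1). Returns soft intra-class variance divided by total variance, so
    minimizing it tightens clusters relative to the overall spread.
    """
    w = p.sum(axis=0) + eps                    # soft class sizes, shape (k,)
    mu = (p.T @ z) / w[:, None]                # soft class centroids, (k, d)
    intra = sum(p[:, c] @ ((z - mu[c]) ** 2).sum(axis=1)
                for c in range(p.shape[1]))    # soft within-class scatter
    total = ((z - z.mean(axis=0)) ** 2).sum()  # total scatter
    return intra / (total + eps)
```

Because only class-wise sums over nodes are needed, the cost is linear in the number of nodes, which is consistent with the linear-scalability claim the authors make when discussing the PIC loss.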
Extensive experiments demonstrate AdaRC's compatibility with existing TTA methods, yielding notable performance improvements on both synthetic and real-world datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"AdaRC introduces a new approach to handling structure shifts by adapting hop-aggregation parameters, which allows the model to improve node representation quality in ways traditional TTA methods cannot.\", \"The experimental evaluation is comprehensive, covering both synthetic and real-world datasets, and demonstrates substantial improvements in accuracy and robustness across different shifts, particularly structure shifts.\", \"The theoretical insights into the impact of structure and attribute shifts are clearly explained, and the figures and visualizations effectively illustrate the method's impact.\"], \"weaknesses\": [\"While the PIC loss offers advantages, its application may face scalability challenges with very large graphs due to the computational overhead introduced by clustering operations. The method's efficiency under extreme graph sizes could be explored further.\", \"AdaRC's performance relies on the optimization of hop-aggregation parameters, which may not generalize across all GNN architectures without tuning. This reliance might limit its applicability for varied and complex GNN models.\", \"AdaRC has been evaluated on static graphs with controlled shifts. 
Its applicability to dynamic graphs, where structure shifts occur over time, is not explored, which could be crucial for certain real-time applications.\", \"The performance of AdaRC's clustering approach may be sensitive to the quality of initial predictions, particularly in cases where pseudo-labeling accuracy is low, potentially affecting the model\\u2019s stability in scenarios with noisy initial data.\"], \"questions\": [\"Can AdaRC be adapted to handle continuously evolving graphs where node connectivity changes over time?\", \"How does the computational efficiency of AdaRC scale when applied to graphs with millions of nodes?\", \"Could PIC loss performance be enhanced by integrating more robust initialization methods for pseudo-labeling in low-confidence scenarios?\", \"Does the framework support automated tuning of hop-aggregation parameters across different GNN architectures, or would manual tuning be required for optimal results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jAsf (Part 2)\", \"comment\": \"# Q1. Conflict between AdaRC and BaseTTA\\n\\n> AdaRC is a plug-and-play method, have you explored scenarios where it might conflict with other TTA methods? For instance, if a TTA method focuses on adapting to changes in node features while AdaRC adjusts to structure shifts, could their optimization goals lead them in opposite directions?\\n\\nThank you for the insightful question. Usually, AdaRC and other TTA methods are orthogonal in their design. AdaRC focuses on improving the quality of node representations by addressing structure shifts, while TTA methods typically focus on better utilizing the node representations. 
This distinction minimizes the likelihood of conflicting optimization goals.\\n\\nIn our experiments, we did not observe any cases where combining AdaRC with a BaseTTA method, both of which improve performance individually, resulted in a performance drop. However, we agree that using a stronger BaseTTA usually leads to better overall performance. \\n\\n# Q2. Performance on Real-World Graphs\\n\\n> On Twitch-E and OGB-Arxiv, AdaRC shows only minor improvements without artificially injected structure shifts. Does this suggest that the impact of structure shift might be less significant in these real-world datasets? Could this be because the type of structure shift addressed by AdaRC is less prevalent in these contexts?\\n\\nThe relatively lower performance on Twitch-E and OGB-Arxiv datasets is due to the higher uncertainty and more complex distribution shifts present in these datasets, which make the classification task harder. This observation is consistent with results in related works [3]. We also experimented with using cross-entropy loss instead of PIC loss. Even when the model was aware of 1% of the testing labels, the accuracy achieved was only 58.21% on Twitch and 42.12% on Arxiv. In comparison, our PIC loss achieved 56.76% and 41.74% accuracy, respectively, without requiring any testing labels.\\n\\n[3] Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah: Empowering Graph Representation Learning with Test-Time Graph Transformation. ICLR 2023\"}", "{\"title\": \"Response to Reviewer qcfU (Part 1)\", \"comment\": \"Dear Reviewer qcfU,\\n\\nWe sincerely thank you for your thoughtful review and for recognizing the strengths of our work, including AdaRC's novel approach to addressing structure shifts, its compatibility with existing TTA methods, the clarity of our theoretical insights, and the comprehensiveness of our experimental evaluations. Below, we address the concerns and questions raised in your review:\\n\\n# W1 & Q2. 
Scalability\\n\\n> W1. While the PIC loss offers advantages, its application may face scalability challenges with very large graphs due to the computational overhead introduced by clustering operations. The method's efficiency under extreme graph sizes could be explored further.\\n\\n> Q2. How does the computational efficiency of AdaRC scale when applied to graphs with millions of nodes?\\n\\nThank you for raising this important point. We would like to clarify that AdaRC does not perform additional clustering operations. Instead, it leverages the pseudo-labels from the BaseTTA method as the clustering results. This design ensures that the computational overhead of calculating the PIC loss is linear with respect to the number of nodes in the graph.\\n\\nTo further validate this, we have conducted experiments on graphs of varying sizes from 1 million to 10 million nodes. The result is given in Figure 12 of Appendix C.8, and also in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_scalability.pdf) for your convenience. The results confirm that the computational time for AdaRC indeed scales linearly with graph size, demonstrating its efficiency even for large graphs. \\n\\n# W2 & Q4. Automated Tuning of Hop-Aggregation Parameters\\n\\n> W2. AdaRC's performance relies on the optimization of hop-aggregation parameters, which may not generalize across all GNN architectures without tuning. This reliance might limit its applicability for varied and complex GNN models.\\n\\n> Q4. Does the framework support automated tuning of hop-aggregation parameters across different GNN architectures, or would manual tuning be required for optimal results?\\n\\nFor the GNNs discussed in the paper (GPRGNN, APPNP, GCNII, JKNet), AdaRC supports automated tuning of hop-aggregation parameters by minimizing the PIC loss, without requiring any manual tuning. 
Specifically, while the original implementations of these frameworks may treat hop-aggregation parameters as hyperparameters (e.g., the teleport probability $\\\\alpha$ in APPNP), we simply treat these parameters as trainable parameters. This allows us to optimize them automatically through gradient descent to minimize the PIC loss, effectively enabling automated adjustment of hop-aggregation parameters. \\n\\n# W3 & Q1. Performance on Evolving Graphs\\n\\n> W3. AdaRC has been evaluated on static graphs with controlled shifts. Its applicability to dynamic graphs, where structure shifts occur over time, is not explored, which could be crucial for certain real-time applications.\\n\\n> Q1. Can AdaRC be adapted to handle continuously evolving graphs where node connectivity changes over time?\\n\\nWhile our experiments in the paper mainly focused on adapting to a single graph with a stable distribution, we also evaluated AdaRC\\u2019s performance on a stream of graphs with evolving structure shifts. Specifically, we used the Syn-Cora dataset, where the model was pre-trained on a source graph with a homophily of 0.8. It was then sequentially adapted to five target graphs with homophilies of 0.1, 0.7, 0.3, 0.9, and 0.2, simulating continuously changing homophily. The experimental setup, except for the sequence of target graphs, was identical to that in the Syn-Cora experiments described in the main text.\\n\\nFor example, in the column corresponding to the target graph with a homophily of 0.3, we compared two scenarios: (1) static: directly adapting from 0.8 \\u2192 0.3, and (2) evolving: sequentially adapting through 0.8 \\u2192 0.1 \\u2192 0.7 \\u2192 0.3. \\n\\nTable A. 
Accuracy of AdaRC on static target graph and evolving target graph (mean \\u00b1 s.d.)\\n| Method | Setting | Target graph (h = 0.1) | Target graph (h = 0.7) | Target graph (h = 0.3) | Target graph (h = 0.9) | Target graph (h = 0.2) |\\n|--------|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|\\n| ERM | Static | 63.19 \\u00b1 1.28 | 88.88 \\u00b1 0.61 | 71.46 \\u00b1 0.62 | 97.15 \\u00b1 0.32 | 65.67 \\u00b1 0.35 |\\n| AdaRC | Static | 79.75 \\u00b1 1.04 | 90.57 \\u00b1 0.47 | 79.68 \\u00b1 0.73 | 97.40 \\u00b1 0.28 | 78.96 \\u00b1 1.08 |\\n| AdaRC | Evolving | 79.75 \\u00b1 1.04 | 90.65 \\u00b1 0.33 | 77.43 \\u00b1 0.62 | 97.31 \\u00b1 0.42 | 78.26 \\u00b1 1.02 |\\n\\nThe results, shown in Table A above, indicate that **AdaRC achieves performance on evolving graphs highly comparable to that on static graphs**. This demonstrates AdaRC\\u2019s ability to handle dynamic scenarios effectively, even under continuously changing graph structures.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer jAsf (Part 1)\", \"comment\": \"Dear Reviewer jAsf,\\n\\nWe sincerely thank you for your thoughtful review and for recognizing the strengths of our work, including the identification of structure shifts as an overlooked problem, the simplicity and effectiveness of the AdaRC framework, and the detailed theoretical analysis provided. Below, we address the concerns and questions raised in your review: \\n\\n# W1. Simplified Model for Theoretical Analysis\\n\\n> The theoretical analysis focuses on single-layer GCNs and CSBM-generated graphs, which represent a simplified model where structure shifts are easily identified. However, the evolving real-world graph structures are far more complex. 
It is unclear whether the findings on this simplified model still hold for real-world graphs, particularly in domains where the task accuracy is not high.\\n\\nWe acknowledge that our theoretical analysis primarily focuses on single-layer GCNs and CSBM-generated graphs. This choice was intentional to provide a rigorous and interpretable foundation for understanding the impact of common structure shifts, including degree and homophily shifts. \\n\\nWe agree that real-world graphs exhibit structure shifts beyond homophily and degree, such as clustering coefficients, which can also impact the performance of multi-layer GNNs. To address this, we have further extended our analysis to $K$-hop GCNs, which aggregate information from nodes' own features as well as their 1-hop to $K$-hop neighbors. Detailed theoretical analysis is provided in Appendix A.7, and also in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_multihop_gcn.pdf) for your convenience. All our propositions naturally generalize to $K$-hop GCNs, capturing more complex structure shifts. Additionally, our analysis does not rely on assumptions about task accuracy, making it broadly applicable. \\n\\n# W2. Verification of Structure Shifts in Real-World Graphs\\n\\n> The experimental results, which show promising performance, are primarily based on synthetic datasets or real-world datasets with synthetic structure shifts. The verification that the structure shifts existence in the real-world graphs should be included.\\n\\nThank you for your suggestion. We agree that verifying the presence of structure shifts in real-world graphs is important and beneficial. Previous studies [1,2] have analyzed common real-world graphs and reported graph metrics such as homophily and degree, demonstrating the prevalence of structure shifts. Additionally, in our work, we have included the metrics for the datasets we used (see Appendix D.1) to provide further insights. 
\\n\\nIf there is a specific graph dataset that you are particularly interested in, we would be happy to calculate and provide the relevant graph metrics to verify the presence of structure shifts. \\n\\n[1] Eli Chien, Jianhao Peng, Pan Li, Olgica Milenkovic: Adaptive Universal Generalized PageRank Graph Neural Network. ICLR 2021\\n\\n[2] Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiang Qiu, Pan Li: Structural Re-weighting Improves Graph Domain Adaptation. ICML 2023\\n\\n# W3. Experiment Setup Details\\n\\n> The experimental setup for real-world datasets lacks transparency. The paper mentions injecting structure shifts by deleting a subset of homophilic edges. However, it did not specify the quantity of this deletion and the selection criteria.\\n\\nThank you for your comment. Due to page limits, detailed experimental setup is provided in Appendix D.1, including the number of edges left. Specifically, the selection of homophilic edges for deletion is performed using random sampling without replacement. \\n\\n# W4. GNN Backbones\\n\\n> The paper presents full results for only one GNN backbone. Presenting results for different GNN backbones, especially fundamental ones, would provide a more comprehensive understanding of AdaRC's effectiveness and applicability across various architectures.\\n\\nThank you for your valuable suggestion. We agree that full experiments with other GNN architectures can provide more comprehensive understandings. Following your advice, we have conducted full experiments with different GNN backbones on Syn-Cora. We include all BaseTTAs used in our paper, and their combination with AdaRC. The results are updated in Appendix C.9. The results show that, although different GNN architectures yield varying levels of performance on the target graph, AdaRC consistently enhances accuracy across all cases. 
This demonstrates that AdaRC is compatible with a wide range of GNN architectures, highlighting its broad applicability and effectiveness.\"}", "{\"title\": \"Algorithm Name Change Notification\", \"comment\": \"We have changed the algorithm name from **AdaRC** to **Matcha**. Please note that all previous reviews and rebuttals refer to the old name. We hope this clarification prevents any confusion.\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Dear Reviewer KHid,\\n\\nThank you for your insightful questions and for taking the time to review our work and rebuttal in detail. Below, we provide detailed responses to your questions. \\n\\n# Q1. Homophilic and Heterophilic Graphs\\n\\n> As mentioned in the rebuttal, SOGA is designed for homophilic graphs and thus performs relatively poorly on heterophilic graphs. So, how about the proposed SOGA on the homophilic graphs and heterophilic graphs?\\n\\nWe understand your question as asking how our proposed *AdaRC* performs on homophilic and heterophilic graphs.\\n\\nOur AdaRC algorithm does not rely on any homophily assumption. Instead, it dynamically adapts the hop-aggregation parameter to make node representations more discriminative. This adaptability ensures that AdaRC can handle both homophilic and heterophilic graphs effectively, as long as the chosen GNN backbone has sufficient expressiveness to model these graph types. For instance, when using a single-layer GCN as the backbone, AdaRC learns a positive $\\\\gamma$ for homophilic graphs and a negative $\\\\gamma$ for heterophilic graphs, aligning the aggregation process with the underlying graph structure.\\n\\nAs shown in Table 1 of our experiments, we evaluated AdaRC with both homophilic target graphs (hetero $\\\\to$ homo) and heterophilic target graphs (homo $\\\\to$ hetero). AdaRC demonstrates strong performance in both cases. 
In contrast, SOGA, being designed specifically for homophilic graphs, shows significantly weaker performance on heterophilic target graphs compared to homophilic ones.\\n\\n\\n# Q2. Mechanisms of How Structure Shifts Affect GNN Performance\\n\\n> The Mechanisms of How Structure Shifts Affect GNN Performance should be explained.\\n\\nThank you for your question. We understand that your inquiry follows up on W3 and seeks a standalone explanation of how *homophily shift* and *degree shift* individually lead to a decline in GNN performance. We are happy to provide a more intuitive explanation. \\n\\n**Theoretical analysis.** Using the theoretical analysis of a single-layer GCN on a CSBM graph from our paper as an example, for a positive-class sample, its representation distribution can be expressed as: \\n$$\\n\\\\boldsymbol{z}\\\\_i \\\\sim \\\\mathcal{N} \\\\left( (1 + \\\\gamma h\\\\_i) \\\\boldsymbol{\\\\mu}\\\\_+ + \\\\gamma (1 - h\\\\_i) \\\\boldsymbol{\\\\mu}\\\\_-, \\\\left( 1 + \\\\frac{\\\\gamma^2}{d\\\\_i} \\\\right) \\\\boldsymbol{I} \\\\right)\\n$$\\n\\n- **Homophily shift:** Changes in $h_i$ (node homophily) affect the mean of $\\\\boldsymbol{z}_i$\\u200b, while the variance remains constant. This reduces the inter-class distance as the centroids of the two classes move closer together, but the intra-class variance does not change. Consequently, the representations of different classes overlap more, leading to decreased accuracy.\\n- **Degree shift:** Changes in $d_i$ (node degree) alter the variance of $\\\\boldsymbol{z}_i$, without affecting the mean. This increases intra-class variance while keeping inter-class distance unchanged. Similarly, this increased overlap between class representations reduces the ability to distinguish between classes and degrades accuracy. 
\\n\\nBoth shifts lead to overlapping representations between different classes, which negatively impacts classification performance.\\n\\n**Visualization.** To further illustrate, we provide two visualization examples in Appendix A.2 (also available in [this anonymous repo](https://anonymous.4open.science/r/AdaRC-ICLR-Rebuttal-93E4/AdaRC_adapt_gamma.pdf) for your convenience):\\n\\n- Under **homophily shift**, the variance of the two class distributions remains constant, but the centroids of the two classes move closer together.\\n- Under **degree shift**, the centroids remain unchanged, but the intra-class variance increases.\\n\\nBoth scenarios clearly demonstrate how structure shifts result in greater overlap between class representations, ultimately reducing accuracy.\\n\\nWe hope this explanation provides a clear understanding of the mechanisms. Please let us know if further clarification or additional details are needed.\"}", "{\"metareview\": \"This paper proposes AdaRc for solving the structure shifts in graphs, which can adjust hop-aggregation parameters in GNN during test-time adaptation.\", \"the_main_strengths_of_the_paper_include\": \"1) study an overlooked problem of TTA; 2) the proposed AdaRc is simple yet effective; and 3) detailed theoretical analysis is provided.\\n\\nDespite these, the reviewers also raised several drawbacks and concerns for this paper in the first round, including: 1) practicality in real-world; 2) experiments on real-world graphs, more GNN backbones and multi-layers; and 3) explanations for the gains among different datasets.\\n\\nThe authors have provided a rebuttal. Only one reviewer has attended the discussion-phase and all the reviewers kept their original ratings after rebuttal, i.e., four weak accepts. The AC has carefully checked the comments of the reviewers and the response of the authors. The AC thinks most of the concerns of the reviewers have been solved. 
Thus, considering the strengths of this paper, the AC thinks this paper could be a valuable work for TTA with graphs and thus recommends acceptance to this paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided a rebuttal. The reviewers did not change their original positive ratings.\"}", "{\"title\": \"Response to Reviewer qcfU (Part 2)\", \"comment\": \"# W4 & Q3. Initial Prediction\\n\\n> W4. The performance of AdaRC's clustering approach may be sensitive to the quality of initial predictions, particularly in cases where pseudo-labeling accuracy is low, potentially affecting the model\\u2019s stability in scenarios with noisy initial data.\\n\\nThank you for raising this concern. We would like to highlight that we addressed this issue in Appendix C.2, where we provided an example demonstrating AdaRC\\u2019s robustness to noisy initial predictions. Specifically:\\n- Before adaptation, the predictions exhibit significant noise, with substantial overlap in the logits of the two classes, reflecting low pseudo-labeling accuracy. \\n- During adaptation, despite the noisy initialization, AdaRC is able to gradually refine the node representations by leveraging the PIC loss, leading to improved class separation and accuracy. \\n\\nThis example illustrates that AdaRC can effectively handle scenarios with noisy initial data, maintaining stability and progressively enhancing model performance.\\n\\n> Q3. Could PIC loss performance be enhanced by integrating more robust initialization methods for pseudo-labeling in low-confidence scenarios?\\n\\nThank you for this great point. Indeed, the performance of PIC loss can be further enhanced with more robust pseudo-labeling methods. More accurate pseudo-labels from BaseTTA lead to gradients that better optimize node representations, resulting in improved performance. 
In our experiments, stronger BaseTTA methods typically also yield better results when combined with AdaRC.\"}", "{\"title\": \"Response to All Reviewers (Part 2)\", \"comment\": \"# 3. Performance on Evolving Graphs (qcfU, M4v2)\\n\\nWhile our experiments in the paper mainly focused on adapting to a single graph with a stable distribution, we also evaluated AdaRC\\u2019s performance on a stream of graphs with evolving structure shifts. Specifically, we used the Syn-Cora dataset, where the model was pre-trained on a source graph with a homophily of 0.8. It was then sequentially adapted to five target graphs with homophilies of 0.1, 0.7, 0.3, 0.9, and 0.2, simulating continuously changing homophily. The experimental setup, except for the sequence of target graphs, was identical to that in the Syn-Cora experiments described in the main text.\\n\\nFor example, in the column corresponding to the target graph with a homophily of 0.3, we compared two scenarios: (1) static: directly adapting from 0.8 \\u2192 0.3, and (2) evolving: sequentially adapting through 0.8 \\u2192 0.1 \\u2192 0.7 \\u2192 0.3. \\n\\nTable A. 
Accuracy of AdaRC on static target graph and evolving target graph (mean \\u00b1 s.d.)\\n| Method | Setting | Target graph (h = 0.1) | Target graph (h = 0.7) | Target graph (h = 0.3) | Target graph (h = 0.9) | Target graph (h = 0.2) |\\n|--------|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|\\n| ERM | Static | 63.19 \\u00b1 1.28 | 88.88 \\u00b1 0.61 | 71.46 \\u00b1 0.62 | 97.15 \\u00b1 0.32 | 65.67 \\u00b1 0.35 |\\n| AdaRC | Static | 79.75 \\u00b1 1.04 | 90.57 \\u00b1 0.47 | 79.68 \\u00b1 0.73 | 97.40 \\u00b1 0.28 | 78.96 \\u00b1 1.08 |\\n| AdaRC | Evolving | 79.75 \\u00b1 1.04 | 90.65 \\u00b1 0.33 | 77.43 \\u00b1 0.62 | 97.31 \\u00b1 0.42 | 78.26 \\u00b1 1.02 |\\n\\nThe results, shown in Table A above, indicate that **AdaRC achieves performance on evolving graphs highly comparable to that on static graphs**. This demonstrates AdaRC\\u2019s ability to handle dynamic scenarios effectively, even under continuously changing graph structures.\"}" ] }
EoTIlDT0Tr
$\mathcal{X}^2$-DFD: A framework for e$\mathcal{X}$plainable and e$\mathcal{X}$tendable Deepfake Detection
[ "Yize Chen", "Zhiyuan Yan", "Siwei Lyu", "Baoyuan Wu" ]
Detecting deepfakes (*i.e.*, AI-generated content with malicious intent) has become an important task. Most existing detection methods provide only real/fake predictions without offering human-comprehensible explanations. Recent studies leveraging multimodal large-language models (MLLMs) for deepfake detection have shown improvements in explainability. However, the performance of pre-trained MLLMs (*e.g.*, LLaVA) remains limited due to a lack of understanding of their capabilities for this task and strategies to enhance them. In this work, we empirically assess the strengths and weaknesses of MLLMs specifically in deepfake detection via forgery-related feature analysis. Building on these assessments, we propose a novel framework called $\mathcal{X}^2$-DFD, consisting of three core modules. The first module, *Model Feature Assessment (MFA)*, measures the detection capabilities of forgery-related features intrinsic to MLLMs, and gives a descending ranking of these features. The second module, *Strong Feature Strengthening (SFS)*, enhances the detection and explanation capabilities by fine-tuning the MLLM on a dataset constructed based on the top-ranked features. The third module, *Weak Feature Supplementing (WFS)*, improves the fine-tuned MLLM's capabilities on lower-ranked features by integrating external dedicated deepfake detectors. To verify the effectiveness of this framework, we further present a practical implementation, where an automated forgery-related feature generation, evaluation, and ranking procedure is designed for the *MFA* module; an automated generation procedure of the fine-tuning dataset containing real and fake images with explanations based on top-ranked features is developed for the *SFS* module; an external conventional deepfake detector focusing on blending artifacts, which corresponds to a low detection capability in the pre-trained MLLM, is integrated for the *WFS* module. 
Experimental results show that the proposed implementation enhances overall detection performance compared to pre-trained MLLMs, while providing more convincing explanations. More encouragingly, our framework is designed to be plug-and-play, allowing it to seamlessly integrate with more advanced MLLMs and external detectors, leading to continual improvement and extension to face the challenges of rapidly evolving deepfake technologies.
[ "Deepfake Detection; Multimodal Large Language Models; Media Forensics" ]
Reject
https://openreview.net/pdf?id=EoTIlDT0Tr
https://openreview.net/forum?id=EoTIlDT0Tr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q2bmD1J6iC", "ofmuIR7jXp", "kkIyKW58i3", "k4Oi3x04My", "gmiPVe6Icy", "eyd8801ekz", "eCNVoQzrfn", "ZDqWf38N8g", "YZcZRHtRD3", "Xeodwmf4Q1", "XVLNClt4ck", "SWMwssNE6a", "RefXifSIgN", "RPVJ1dP9Qg", "PjBNb7obhO", "N3hFM4dhCe", "KWyeRvAOUG", "ERoReVGHJP", "B3qt8A3OeE", "Ade3EKnbSY", "0L0dih81hk" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1730699823279, 1734427813897, 1732530923806, 1732530706724, 1732331472709, 1732331298069, 1733035886171, 1732531073981, 1732330634874, 1730534605328, 1732352021437, 1732328815602, 1730421350773, 1732352107902, 1732530805839, 1732349871439, 1732350589950, 1730652615820, 1732328659521, 1732349786685, 1737523538497 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2882/Reviewer_nfD6" ], [ "ICLR.cc/2025/Conference/Submission2882/Area_Chair_MNFA" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Reviewer_FRF6" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Reviewer_Umwy" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Reviewer_FRF6" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Reviewer_37wm" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Submission2882/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a new method for deepfake detection based on MLLMs. The method enhances the deepfake detection capabilities of MLLMs by ranking forgery-related features, strengthening strong features and augmenting weak features. In addition, the paper raises the explainability of MLLMs' inference through fine-tuning. Evaluations on multiple datasets prove the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper proposes Model Feature Assessment(MFA), Strong Feature Strengthening (SFS) and Weak Feature Supplementing (WFS) to enhance the detection capabilities of MLLMs, contributing to MLLM-based algorithms on deepfake detection.\\n\\n2.\\tThe paper points out the limitations of pretrained MLLMs and introduces a pipeline for constructing datasets based on more effective forgery features.\\n\\n3.\\tThe experiments provide different protocols and a wide range of evaluation to prove the effectiveness of the method.\", \"weaknesses\": \"1.\\tThe authors claim that human-annotated VQA data (zhang et al, 2024) may not be ideal as standard answers for fine-tuning MLLMs. Instead, they propose using MLLM-generated data to fine-tune MLLMs. This raises an important question: if all answers are expressed in human's natural language, how can MLLM-generated data establish a more 'standard' answer? Even in the rebuttal response, the author claims their collection method is efficient instead of being a \\\"standard\\\" answer.\\n\\n2.\\tThe generated questions lack reliability due to the absence of a human verification process to ensure the accuracy of these fake features. 
Additionally, the authors use data generated by MLLMs to evaluate the MLLMs themselves, which raises concerns about the validity of this approach. This circular logic is confusing and calls into question the robustness of the evaluation.\\n\\n3.\\tThe authors claim to study the intrinsic capabilities of MLLMs on detection. However, Strong Feature Strengthening (SFS) and Weak Feature Supplementing (WFS) seem to be a process of dataset construction based on more effective features rated by the ranking in Sec. 3.3, thus becoming less pervasive in how it improves the explainability of MLLMs. Although the rebuttal response emphasizes its effectiveness, I am still not convinced by it. For example, in Fig. 5, what if the blending score is wrong? what if the face is not generated by the blending process so the blending score is low while the face is still fake?\\n\\n4.\\tThe explanation ability of X^2-DFD should also be quantitatively evaluated in the main paper. The authors mention conducting a human study to assess explanation performance in Fig. 7, but they only verbally claim that their model is superior and provide a single qualitative example in the appendix. This lacks reliability and ignores the main contribution of explainability.\\n\\n5. A new question after the rebuttal: Line 176, LLaVA and XceptionNet achieve really unreliable detection performance, e.g.., 63.7% and 75.8%; this means both of them are almost equally incapable in the detection task, and analyzing them is less meaningful.\", \"questions\": \"1.\\tIn Figure 1, I wonder why X^2-DFD predicts a lower fake probability on the second picture compared with a pre-trained MLLM. Meanwhile, the response of X^2-DFD considers the image completely fake, which is contrary to the predicted probability.\\n\\n2.\\tFigure 2 is too small; the numbers are barely visible.\\n\\n3.\\tThere are undefined and inconsistent expressions. In Sec. 
4.2.3 and Figure 5, The abbreviation of WCS is undefined in the aforementioned method. In Sec. 5.4 and Tab. 3, GCS has never been seen before.\\n\\n4.\\tMore details on the implementation of fine-tuning MLLMs should be provided. Which part of the parameters are trained? The dataset information should be detailed.\\n\\n[Summary]: I appreciate the author's efforts in the rebuttal, while I still would like to maintain my first-round score and suggest this paper revised for the next submission.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a MLLM-based method to detect face forgery. All reviewers gave positive recognitions to adopting MLLMs for explainable deepfake detection. However, part of the reviewers still had concerns about experiments, opinions and writing. I have reviewed this revised paper and had some concerns that were consistent with reviewers.\\n\\nSome necessary experiments or analyses are missing to support the authors' views or conclusions. For example, the performances on the low-quality images are not evaluated (e.g., using the model trained on the RAW version of FF++ to test the FF++ with the C23 or C40 version), the discussion on whether LLM would misunderstand normal post-processing traces (e.g., blur or contrast) as forgery traces is not provided, specific experimental results are lacking to demonstrate that applying SFS to weak features is ineffective and that the fine-tuning results are similar to the No SFS baseline, sufficient quantitative results are missing to support the claimed enhancements to explanation of the pre-trained MLLM.\\n\\nIn addition, given that the large gap between the Phi-3-vision and the LLaVa-7B model, the stated view that the framework does not rely on a specific MLLM is not sufficiently convincing. 
According to this view, the proposed method should be able to achieve similar detection performance under different MLLMs. Besides, some obvious writing issues remain in the revised version, e.g., missing citations in L916 and L1115. Based on the above considerations, I do not recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided rebuttals for each reviewer. Reviewer FRF6 provided a response but still had concerns about the technical depth and contribution. Considering that other reviewers did not provide further responses and that the rating scores varied widely, I reviewed this revised paper and had concerns about experiments, opinions and writing, which were also raised by reviewer nfD6, 37wm, Umwy and FRF6.\"}", "{\"title\": \"Appreciation for Review and Request for Feedback\", \"comment\": \"Dear Reviewer Umwy,\\n\\nWe want to convey our sincere appreciation for the valuable insights and suggestions you provided regarding our work.\\n\\nWe have made efforts to address the concerns and queries you raised during the rebuttal process. It would be immensely helpful to receive feedback on whether our response effectively alleviated any doubts you may have had. Your feedback is crucial to enhancing the quality of our work.\\n\\nRecognizing the demands of your busy schedule, we genuinely appreciate your contribution to the refinement of our manuscript. As the end of the rebuttal period is approaching, we eagerly await your reply before the end.\\n\\nOnce again, thank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Appreciation for Review and Request for Feedback\", \"comment\": \"Dear Reviewer nfD6,\\n\\nWe want to convey our sincere appreciation for the valuable insights and suggestions you provided regarding our work.\\n\\nWe have made efforts to address the concerns and queries you raised during the rebuttal process. 
It would be immensely helpful to receive feedback on whether our response effectively alleviated any doubts you may have had. Your feedback is crucial to enhancing the quality of our work.\\n\\nRecognizing the demands of your busy schedule, we genuinely appreciate your contribution to the refinement of our manuscript. As the end of the rebuttal period is approaching, we eagerly await your reply before the end.\\n\\nOnce again, thank you for your time and consideration.\\n\\nBest regards,\\n\\n Authors\"}", "{\"title\": \"Response to Reviewer 37wm (Part 2/2)\", \"comment\": \"**Q4. Authors only select the specific LLM and MLLM models in the framework. How to ensure the generality and reasonability of the conclusion? Only the LLaVa model is chosen as the typical MLLM. Do most MLLMs contain similar properties? The author should point this out.**\\n\\n**A4:** Thank you for your valuable question. We initially select LLaVA and GPT-4 due to their popularity and widespread adoption in various applications.\\n\\nTo address your concern and ensure the generality of our framework, we expand our experiments as follows:\\n\\n- **For question generation (MFA):** We employ a different LLM, **Claude 3.5-Sonnet**, to generate the question list.\\n- **For fine-tuning (SFS and WFS):**\\n 1. We test different model sizes of the same MLLM, such as **LLaVA-7B** and **LLaVA-13B**.\\n 2. We evaluate other MLLMs, including **Phi-3-Vision**.\\n\\nThese additional experiments, detail in **Appendix 5**, confirm that our framework is not dependent on specific LLMs or MLLMs. 
Furthermore, the results indicate that advancements in MLLMs are likely to enhance performance even further.\\n\\n**Table 3 Experiments on different LLMs/MLLMs.**\\n\\n|Variant|CDF-v2|DFDCP|DFDC|DFD|Uniface|e4s|Facedancer|FSGAN|Inswap|Simswap|Avg|\\n-|-|-|-|-|-|-|-|-|-|-|-\\nGPT4o + Phi-3-vision|88.6|87.1|83.5|90.9|81.8|77.5|78.8|85.7|77.5|80.6|83.2\\nGPT4o + LLaVa-7B|90.3|89.7|83.5|92.5|85.2|91.2|83.8|89.9|78.5|84.9|87.0\\nClaude3.5-Sonnet + LLaVa-7B|88.8|88.5|82.6|92.7|84.6|90.1|83.8|89.7|79.5|85.6|86.6\\nGPT4o + LLaVa-13B|91.3|90.3|83.4|92.5|86.0|92.5|84.5|91.0|80.6|85.4|87.8\\n\\nWe appreciate your insightful feedback, which has allowed us to strengthen the comprehensiveness and generality of our work.\\n\\n**Q5. How can the commonality of the definition of forgery features be ensured? Authors utilized the GPT-4o to generate a questions list, how about other LLM models?**\\n\\n**A5:** Thank you for your thoughtful question. We address the commonality of forgery features and the performance of other LLMs as follows:\\n\\n- **Commonality of Forgery Features**\\n 1. **Question Reliability via Prompt Design:**\\n We guide LLMs using a carefully designed prompt (Figure 4: FMA Question Generation) to ensure that generated questions are:\\n - **Clear:** Focused on forgery-related features.\\n - **Diverse:** Avoiding redundancy and ambiguity.\\n - **Broad Coverage:** Adjusted to ensure a wide range of forgery features.\\n 2. **Support for Human Verification:**\\n Human verification is supported to ensure reliability in our framework. 
Most questions generated by the LLMs are accurate and meet requirements without errors, as demonstrated in **Table 13** and **Table 14**.\\n\\n- **Performance of Other LLM Models:**\\n Other LLMs, such as Claude 3.5-Sonnet, also generate diverse and reliable questions (**Table 15**, **Table 16**) and contribute to performance improvements within our framework (**Table 6**).\\n\\nThank you again for your valuable feedback, which has helped us enhance our work.\\n\\n**Q6. In TABLE 2. Why the proposed method is inferior to comparison methods with fsgan and inswap generation models? Please give a deeper theoretical analysis.**\\n\\n**A6:** Thank you for your insightful question. While the performance of the proposed method is slightly inferior to the other baselines on FS-GAN and InSwap-generated fake data, the results are still quite close. Moreover, we would like to point out that traditional methods exhibit a larger variance in their performance, whereas our method shows robust performance and lower variance on different forgeries and largely outperforms other traditional approaches by 7% points in terms of average AUC performance.\\n\\nIt is possible that traditional methods focus more on specific types of forgeries (more like \\\"specialist\\\"), leading them to perform well on certain forgery patterns but not generalize as effectively, while our method is more robust and can generalize across a broader range of forgeries (like \\\"generalist\\\"). Our framework provides an effective strategy for combining MLLMs and conventional approaches, thereby demonstrating superior results.\\n\\n**Q7. 
In TABLE 3, the GCS concept was not explained clearly, and the ablation study settings are somewhat confusing.**\\n\\n**A7:** Thank you for pointing this out.\\n\\n- **WCS** refers to **Weak Capability Strengthening**, aligned with **Weak Feature Strengthening (WFS)**.\\n- **GCS** refers to **Good Capability Strengthening**, aligned with **Strong Feature Strengthening (SFS)**.\\n\\nWe have updated these terms for clarity and revised **Lines 478-479** and **492-493** in **Section 5.4** of the manuscript.\\n\\n**Q8. To ensure the integrity of the experiment, authors also could provide in-domain results in the FF++ dataset.**\\n\\n**A8:** Thank you for your suggestion. We have supplemented the in-domain experimental results on the **FF++ dataset**, which can be found in **Appendix 6** of the revised manuscript.\"}", "{\"title\": \"Response to Reviewer 37wm (Part 1/2)\", \"comment\": \"**Q1. It is hard to believe the simple fine-tuning of multi-modal large models typically can achieve such satisfactory results. The public source code can help prove its reproducibility.**\\n\\n**A1:** Thank you for your positive feedback on our work! The effectiveness of our model can be attributed to two key strategies:\\n\\n- **Focusing on Strong Discriminative Forgery Features**: our approach enables the MLLMs to focus more on **strong discriminative forgery-related features** while eliminating distractions from weaker discriminative features. Additionally, after fine-tuning the MLLM with the strong feature, the model's ability to recognize strong features is further improved (as shown in **Figure 6**), enhancing the overall detection performance.\\n\\n- **Integrating with EDD to Supplement the Weak Features**: in addition to fine-tuning the MLLM using its strong forgery-related features, we also consider leveraging the EDD to supplement its \\\"weakness\\\". 
Specifically, by fine-tuning MLLMs with EDD's outputs, we aim to encourage the MLLMs to effectively leverage **accurate information** from EDD, addressing their weaker capabilities and further strengthening their overall capabilities.\\n\\nThese strategies work together to ensure the model achieves satisfactory performance. As mentioned in the main text, we are committed to releasing our code for reproducibility **once the paper is accepted**. Our fine-tuning implementation is based on the official **LLaVA** fine-tuning implementation, ensuring both **reproducibility and reliability**.\\n\\n**Q2. Directly using the prediction probabilities of the smaller model as text input for the LLM seems unreasonable, as LLMs lack numerical reasoning capabilities.**\\n**A2:** Thank you for your insightful feedback, which allows us to clarify this important aspect of our framework.\\n\\n- **Not Relying on Numerical Reasoning Capabilities**:\\n We agree that large language models generally lack strong numerical reasoning capabilities. However, our approach does *NOT* explicitly rely on numerical reasoning ability. Instead, our framework is designed to **learn how to effectively *use* the accurate information from EDD as a supplement**. Below, we describe experiments confirming that, while the MLLM lacks numerical reasoning capabilities, it can nevertheless learn to use the numerical data effectively.\\n- **Experimental Verification**:\\n To validate this, we have conducted experiments in *Table 1* (detailed in the revised version **Table 9**) comparing performance with and without training. 
In the **\\\"no train + infer\\\"** row, where the *Blend Score* is directly added to the prompt without *prior training*, the model struggles to effectively understand and utilize the scores, as it relies solely on its numerical reasoning capabilities.\\n\\n**Table 1 Compare with Numerical Reasoning Capabilities in LLM and our framework.**\\n\\n|Configuration|CDF-v2|DFD|DFDC|DFDCP|Avg|\\n-|-|-|-|-|-\\nno train + infer|0.8171|0.9062|0.7906|0.8134|0.8318\\n**train + infer**|**0.9062**|**0.9232**|**0.8300**|**0.8873**|**0.8867**\\n\\nWe appreciate your thoughtful comments and hope this explanation clarifies our approach.\\n\\n**Q3. Since Pretrained LLMs lack fundamental knowledge about true/false classification, ranking based on relevance generated by Pretrained LLMs may not be meaningful. It may be that random selection of other questions has a similar effect. Can the author provide further analysis?**\\n\\n**A3:** Thank you for your thoughtful feedback. We would like to clarify that the MLLMs have **partial** fundamental knowledge about the deepfake detection tasks, as shown in **Table 13**, where certain features exhibit strong discriminative power. A detailed analysis of their strengths and weaknesses is provided in Section 3.3.\\n\\nTo further validate why the generated questions are meaningful, we provide the following clarifications:\\n\\n- **Utilizing the Strong Discriminative Capabilities of MLLMs for Fine-tuning:** Pretrained MLLMs show potential in strong discriminative features. Fine-tuning amplifies this strength by focusing on these strong discriminative forgery features, helping the model prioritize them while reducing its attention to weaker, less relevant ones, akin to feature selection.\\n\\n- **Random Selection vs. SFS:**\\n - Randomly selecting questions, as suggested, the line **No SFS** simulating random selection in **Table 3**, where not select Strong Feature to construct datasets for finetune. 
This leads to suboptimal performance.\\n - Furthermore, by focusing on Strong Features, the model is guided to provide reasons based on these highly discriminative features. In contrast, using randomly selected features often results in less credible explanations.\\n\\n**Table 2 Random Selection vs. SFS.**\\n\\n|Method|CDF-v2|DFD|DFDC|Uniface|Avg|\\n-|-|-|-|-|-\\nRandom (no SFS)|79.0|88.9|77.8|82.3|82.0\\nSelected (SFS)|83.2|91.4|82.0|84.5|85.3\\n\\nThank you once again for your valuable comments and suggestions, which have greatly helped us refine and improve our work.\"}", "{\"title\": \"Keep My Rating\", \"comment\": \"Thanks for your detailed response, which addresses partial concerns. In consideration of the lack of technical depth and contribution, I still keep my original rating. Thanks.\"}", "{\"title\": \"Appreciation for Review and Request for Feedback\", \"comment\": \"Dear Reviewer FRF6,\\n\\nWe want to convey our sincere appreciation for the valuable insights and suggestions you provided regarding our work.\\n\\nWe have made efforts to address the concerns and queries you raised during the rebuttal process. It would be immensely helpful to receive feedback on whether our response effectively alleviated any doubts you may have had. Your feedback is crucial to enhancing the quality of our work.\\n\\nRecognizing the demands of your busy schedule, we genuinely appreciate your contribution to the refinement of our manuscript. As the end of the rebuttal period is approaching, we eagerly await your reply before the end.\\n\\nOnce again, thank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer Umwy\", \"comment\": \"**Q1. How do you ensure the explainability brought by LLMs is \\\"real\\\" explainability instead of merely providing some texts describing why an image is fake or not?**\\n\\n\\n**A1:** Thank you for this thoughtful question. 
We completely agree that **LLMs are not inherently transparent or easily interpretable**. This is why we designed the **Model Feature Assessment (MFA)** module to systematically evaluate the discriminative capabilities of MLLMs, enhancing the strong discriminative features and supplementing the weak. Below is the detailed explanation of how we ensure **real explainability**:\\n\\n1. **Assessing Real Discriminative Features:** \\n - The proposed MFA module evaluates the vast capabilities of MLLMs to identify **features that are truly known** and discriminative for distinguishing real and fake content. This ensures that the explanations are grounded in meaningful and relevant features rather than arbitrary text.\\n\\n2. **Enhancing Feature Understanding:** \\n - For strong forgery-related features, we apply **Strong Feature Strengthening (SFS)** to improve the model\\u2019s ability to recognize and explain these features. As shown in Figure 6, **Top 1/3 features see a relative improvement of 28.6%**, demonstrating enhanced capability. \\n\\n3. **Ensuring Accuracy for Weak Features:** \\n - For weak features, we incorporate an external detector (**EDD**) to ensure the accuracy of feature extraction and interpretation. This extension allows the framework to provide reliable and evidence-based explanations for weak features.\\n\\nBy combining these approaches, our framework ensures that the explanations are grounded in real, identifiable features rather than merely generating arbitrary text descriptions. This comprehensive design ensures that **explanations are evidence-based and meaningful**. Thank you again for your question, which allowed us to clarify this critical aspect of our work!\\n\\n\\n**Q2. Based on my first point, how would you evaluate the explainability of the proposed method? There seems to be a lack of metric or ground truth involved in your research. 
I wonder if human evaluation is fairly enough to evaluate the explainability since human subjects involved in the experiments also have no real knowledge about how the fake data was created and why it is fake.**\\n\\n**A2:** Thank you for raising this valuable question. Here are our responses:\\n\\n1. **Challenges of Ground Truth for Explainability:** \\n The lack of **detailed annotations of specific forgery artifacts** makes it challenging to accurately quantify and evaluate explainability. To our knowledge, there are no existing studies proposing quantitative evaluation metrics for explainability that can be directly applied to our framework.\\n\\n2. **Human Study Design and Reliability:** \\n - While the absence of detailed annotations limits precise quantification, human evaluators offer a practical solution for **quantitative assessment**, as detailed in **Section 5.5** and **Appendix A.4**. \\n - Although human assessments may not be completely objective, our human study is designed to ensure **reliability**: \\n - We included **well-educated participants** who were provided with **clear guidelines** to ensure consistent and reliable results. \\n - Additional details of the human study, including the methodology and guidelines, have been added to **Appendix A.4** and highlighted in **red** (lines 928-933). Due to the double-blind review process, specific details such as **ethical approval documents** will be included in the final version. \\n - We believe this carefully designed human study offers a reliable way to evaluate explainability in the absence of detailed forgery artifact annotations.\\n\\n3. **Integrating Deepfake Prior Knowledge to Improve Explainability:** \\n Thank you for this constructive suggestion. We agree that incorporating **prior knowledge of deepfake** into the framework is a promising direction worth exploring. 
While we do not currently have a detailed plan, we believe this idea could potentially be integrated into the process between **MFA and SFS** stages, enhancing the model\\u2019s ability to explain its reasoning. We consider this an interesting avenue for future work and plan to explore it further in subsequent studies.\\n\\nWe greatly appreciate your insightful comments, which have helped us refine and improve our framework.\\n\\n\\n\\n**Q3. The extendability of the proposed method should be further explained. Most efforts of the paper are on explainability. I do not clearly see what you mean by \\\"extendable\\\".**\\n\\n**A3:** Thank you for your question. By **\\\"extendable,\\\"** we mean that our framework can seamlessly integrate external dedicated detectors (EDDs) by constructing VQA datasets and training the model to utilize EDD outputs to supplement weak features. A systematic analysis of this extendability is provided in **Appendix 2**. We have added a **Content Structure of the Appendix** in **Section 5** for clearer guidance.\"}", "{\"summary\": \"The paper proposes X2-DFD, a novel framework that utilizes Multimodal Large Language Models (MLLMs) for explainable and extendable DeepFake Detection. The basic idea of the paper is quite interesting. And the paper seems to be the first to systematically assess the inherent capabilities of MLLMs specifically in deepfake detection, which is valuable.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.The paper first systematically assesses the inherent capabilities of MLLMs specifically in deepfake detection, and has found that MLLMs have varying discriminating capabilities on different forgery features.\\n2.It proposes a novel approach to fine-tune the MLLM to make it better adaptive with the deepfake detection task. 
\\n3.Besides MLLM, it integrates external dedicated detectors (EDDs) to fill the gap where MLLMs show limitations.\", \"weaknesses\": \"Although interesting to see MLLM being applied in improving Deepfake detection performance, I still wonder if the interpretability brought by MLLM is reliable. Therefore I have two major concerns:\\n1.\\tThe explainability of large language models (LLMs) like GPT, BERT, and similar models is a complex and evolving area. Generally, LLMs are not inherently transparent or easily interpretable due to their massive size and the intricacy of their neural architectures. So, how do you ensure the explainability brought by LLMs is \\u201creal\\u201d explainability instead of merely providing some texts describing why an image is fake or not. \\n2.\\tBased on my first point, how would you evaluate the explainability of the proposed method? There seems to be a lack of metric or ground-truth involved in your research. I wonder if human evaluation is fairly enough to evaluate the explainability since human subjects involved in the experiments also have no real knowledge about how the fake data was created and why it is fake. By the way, the details of the human experiment are not provided. \\nIn my opinion, it will be more promising to integrate the knowledge about the deepfake creation and the true difference between real and fake data into the detection stage in order to improve the detector\\u2019s explainability. Nevertheless, I still think it is a good paper which provides a good trial on solving deepfake detection with MLLMs.\", \"questions\": \"The extendability of the proposed method should be further explained. Most efforts of the paper are on the explainability. 
I do not clearly see what you mean by \\u201cextendable\\u201d.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FRF6 (Part 4/4)\", \"comment\": \"**Q7. The reason for choosing LLaVA & GPT-4 is unclear. What would happen with different MLLMs, or different parameter configurations (e.g., LLaVA-13B/7B or GPT-3)? X2-DFD\\u2019s sensitivity to various MLLMs remains unexplored. The paper lacks a discussion on how common large models would perform in comparative experiments after fine-tuning, such as the effects of switching to different LLM base models**\\n\\n**R7:** Thank you for raising this important question. Below, we clarify our rationale and address your concerns:\\n\\n1. **Rationale for Selecting GPT-4 and LLaVA:**\\n - We select **GPT-4** and **LLaVA** because they are **representative models** in their respective categories:\\n - **GPT-4** serves as a state-of-the-art **closed-source model**, offering advanced capabilities.\\n - **LLaVA** is a leading **open-source model**, widely adopted in research for its accessibility and versatility.\\n - Our aim was to evaluate how our framework can **adapt to and leverage different MLLMs**, rather than focusing solely on comparing specific models.\\n\\n2. 
**Additional Experiments:**\\n To further address your concerns, we conduct additional experiments with different MLLMs, parameter configurations, and LLMs for generating questions:\\n - **Different LLMs for Question Generation:** Using various LLMs in MFA consistently demonstrates that our framework performs robustly, irrespective of the LLM used for question generation.\\n - **Different Parameter Configurations:** As the model size increases (e.g., LLaVA-7B to LLaVA-13B), the framework\\u2019s performance improves proportionally, benefiting from the enhanced capabilities of larger MLLMs.\\n - **Different MLLMs for Fine-tuning:** The results indicate that our framework does not rely on a specific MLLM, performing strongly across various MLLMs tested.\\n\\nWe have included additional details and results in **Appendix 5** of the updated manuscript.\\n\\nThank you again for your insightful suggestion. We hope this discussion addresses your concerns and provides further clarity.\\n\\n**Q8. Writing issues: (1) Citation inconsistencies, e.g., the reference in L136 is missing. ArXiv references, especially in L598-601 and L619-621, follow inconsistent formats. (2) Missing percentage symbols in metrics reported on L175-180. (3) Metric explanations occupy excessive space. (4) Table 4 lacks clarity regarding which metrics are presented.**\\n\\n**A8:** We have addressed the writing issues as follows:\\n\\n1. Added the missing reference in L136, ensured consistent formatting for ArXiv references (L598\\u2013601, L619\\u2013621), included missing percentage symbols (L175\\u2013180), and clarified in Appendix 2 that Table 4 uses AUC as the evaluation metric.\\n\\n2. Condensed the metric explanations to improve brevity while retaining clarity.\\n\\nThese revisions enhance the manuscript\\u2019s readability and precision.\\n\\n**Q9. During the reasoning process, the paper utilizes an external reasoning engine for analysis. 
However, if the results from this engine are subpar, it could significantly impact the model's performance. Unfortunately, the paper does not discuss this in detail\\uff08Especially, the training details of external expert models are not mentioned\\uff09. Additionally, in Table 3's ablation study, the minimal performance difference between the EDD-only and final models suggests limited gains. Furthermore, using an untrained MLLM as a baseline for comparison seems unreasonable.**\\n\\n**R9:** Thank you for your comments. We would like to address your concerns as follows:\\n\\n1. **Impact of EDD and Analysis in A.2:**\\n - We acknowledge the potential impact of EDD on the overall performance. To address this, we conduct a detailed study in **Appendix A.2: Feature Supplementing Analysis**, where we experiment with multiple EDD models and derive **three criteria** (lines 813-819) for selecting suitable EDDs.\\n - While EDD (e.g., CDFA) demonstrates strong standalone performance, the **average performance improvement from 75.9% to 85.6% (a 9.7% absolute gain)** in **Table 2** highlights the significant value of integrating EDD with our model. This improvement underscores the complementary role of our framework in leveraging EDD outputs for better overall generalization and robustness.\\n\\n2. **Training Details of External Expert Models:**\\n - The training details of the EDD models are provided in **Appendix A.2** (lines 821\\u2013825, 830\\u2013833, and 862\\u2013863).\\n - Additionally, the training process for the WFS module (which is responsible for learning how to integrate EDD into the framework) is detailed in **Section 4.2.4** (lines 362\\u2013392).\\n\\n3. **Regarding Pre-trained MLLM as a Baseline:**\\n - The pre-trained MLLM serves as the **foundation** of our framework, and all improvements are built upon it. 
While this is the most basic module, we also provide additional baselines such as **No SFS**, which can be viewed as another comparison point for performance without specific enhancements.\\n\\nThank you again for your suggestions!\"}", "{\"title\": \"Response to Reviewer nfD6 (Part 2/2)\", \"comment\": [\"**Q4. The explanation ability of X^2-DFD should also be quantitatively evaluated in the main paper. The authors mention conducting a human study to assess explanation performance in Fig. 7, but they only verbally claim that their model is superior and provide a single qualitative example in the appendix.**\", \"**A4:** Thank you for your thoughtful feedback. Below, we clarify and address your concerns:\", \"1. **Challenges in Quantitatively Evaluating the Explainability in the Field:**\", \"We would like to clarify that there is a lack of **detailed annotations of specific forgery artifacts**, making it difficult to accurately quantify and evaluate explainability. To our knowledge, we have not found any existing studies that propose quantitative evaluation metrics for explainability that can be directly used in our framework. If we have overlooked any related work, we kindly ask the reviewer to remind us and point it out.\", \"2. 
**Reliability of Our Human Study:**\", \"The lack of detailed annotations limits the ability to accurately quantify explainability, but human evaluators can also offer a practical solution for **quantitative assessment**, as detailed in **Section 5.5** and **Appendix A.4**:\", \"Although the human assessment cannot be completely objective, our human study can still be **reliable** for evaluation, as follows:\", \"Our human study included **well-educated participants** who were provided with **clear guidelines** before the experiment to ensure they could deliver reliable results.\", \"We have added additional experimental details about the human study, highlighted in **red** in **Appendix A.4: Human Study Details and Analysis** (lines 928-933). Due to the double-blind review process, more specific details, including **ethical approval documents with official stamps**, will be included in the final version.\", \"We believe these considerations and designs can provide a reliable quantitative assessment of the explainability, though no accurate and detailed annotations of specific forgery artifacts are available.\", \"Thank you once again for your valuable comments, which have been instrumental in enhancing the clarity and rigor of our work.\", \"**Q5: The issues in the paper might cause misunderstanding:\", \"1. Figure 1, a lower fake probability on the second picture compared with a pre-trained MLLM.\", \"2. Figure 2 is too small, the numbers are barely visible.\", \"3. Undefined and inconsistent expressions. WCS and GCS in paper**\", \"**A5:** Thank you for carefully reviewing our paper and pointing out details that may cause misunderstanding. We have revised all the mentioned content in the updated manuscript to address these issues:\", \"**Figure 1**: Updated the **probability** from `0.37` to `0.96` for improved accuracy. (lines 62-63)\", \"**Figure 2**: Enlarged the font to ensure the numbers are clearly visible. 
(lines 184-187)\", \"**Section 4.2.3 and Figure 5**: (lines 358-366/478-479/492-493)\", \"Unified **WCS (Weak Capability Strengthening)** to **WFS (Weak Feature Strengthening)**.\", \"Unified **GCS (Good Capability Strengthening)** to **SFS (Strong Feature Strengthening)**.\", \"These terms were standardized to ensure consistency and improve readability.\", \"Thank you once again for your valuable suggestions, which have greatly helped us enhance the clarity of our work.\", \"**Q6. More details on the implementation of fine-tuning MLLMs should be provided. Which part of the parameters are trained? The dataset information should be detailed.**\", \"**A6:** Thank you for requesting additional details on our fine-tuning approach. Below, we provide clarification on the fine-tuning methodology and dataset information:\", \"**Fine-Tuning Methodology**:\", \"The fine-tuning of MLLMs is conducted as described in **Section 4.2.2 Step 3: MLLM Fine-Tuning**. Specifically, we modify the **projector module** and apply LoRA fine-tuning on the **language model**.\", \"**Dataset Information**:\", \"**Image Data**: In addition to the datasets mentioned in **Section 5 Experimental Settings**, we adhere to the standard deepfake benchmark (**DeepfakeBench [1]**) for processing the original data. Face alignment and face cropping procedures are consistent with DeepfakeBench, and we use **eight frames per video** for model training.\", \"**VQA Data**: The VQA dataset is built through **two rounds of dataset construction** (**SFS**, as shown in **Figure 4**, and **WFS**, as shown in **Figure 5**) and one round of **VQA-specific fine-tuning**. Additionally, the annotation for **WFS** in **Figure 5** has been revised to improve clarity and reader comprehension (lines 351-353/358-359).\", \"Thank you once again for your thoughtful feedback, which has helped us refine and clarify our methodology.\"], \"reference\": \"[1] Yan Z, Zhang Y, Yuan X, et al. 
Deepfakebench: A comprehensive benchmark of deepfake detection. NeurIPS 2023.\"}", "{\"summary\": \"The authors propose a novel framework called X2-DFD, consisting of three core modules (i.e., Model Feature Assessment (MFA), Strong Feature Strengthening (SFS), and Weak Feature Supplementing (WFS)) for explainable deepfake detection. The results on several benchmark datasets demonstrate its effectiveness. However, the overall story and solution is interesting but with too much human workload and tricks while without enough technical contribution.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper adopts MLLMs for explainable deepfake detection. The cross-dataset performance seems good. The structure of the paper is clear.\", \"weaknesses\": \"1. The cross-manipulation performance presented in Table 2 is suboptimal despite using a trainable MLLM and a non-trainable GPT-4. Given the extensive parameters and pre-trained data leveraged, the proposed method cannot achieve SOTA performance. The capability of handling unseen attacks limits its applications in real-world scenarios.\\n\\n2. X2-DFD\\u2019s explainablity is unsatisfactory for some samples, such as the left bottom sample in Figure 1, which outputs vague statements like \\u201c...with an unusual layout and unnatural skin tone...\\u201d. This description is oversimplified. It cannot provide sufficient information to users and offers less explanatory power than traditional methods like Grad-CAM (which can provide heatmaps). Is this a common issue across samples, and how is explainablity assessed (e.g., through user studies)?\\n\\n3. The authors ignore prior work using LLMs for deepfake detection and over-claim their contribution (Lines 100-103)(r1,r2). 
The summary on Lines 067-075 misrepresents previous work [r1] by claiming \\u201c...fail to provide intuitive and convincing explanations behind predictions.\\u201d\\n\\n[r1] FakeBench: Probing Explainable Fake Image Detection via Large Multimodal Models\\n[r2] Common Sense Reasoning for Deep Fake Detection\\n\\n4. The authors do not present experiments on low-quality images or post-processed images.\\n\\n5. The ablation study is insufficient. What would happen if WFS were applied to Strong Features or SFS to Weak Features? Or if both Strong and Weak Features used the same module?\\n\\n6. The idea of using an external model (EDD in this paper) to support LLMs is not novel.\\n\\n[r3] LISA: Large Language Instructed Segmentation Assistant\\n[r4] AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models\\n\\n7. The reason for choosing LLaVA & GPT-4 is unclear. What would happen with different MLLMs, or different parameter configurations (e.g., LLaVA-13B/7B or GPT-3)? X2-DFD\\u2019s sensitivity to various MLLMs remains unexplored.\\n\\n8. Writing issues:\\n(1) Citation inconsistencies, e.g., the reference in L136 is missing. ArXiv references, especially in L598-601 and L619-621, follow inconsistent formats.\\n(2) Missing percentage symbols in metrics reported on L175-180.\\n(3) Metric explanations occupy excessive space.\\n(4) Table 4 lacks clarity regarding which metrics are presented.\\n\\n\\n9. During the reasoning process, the paper utilizes an external reasoning engine for analysis. However, if the results from this engine are subpar, it could significantly impact the model's performance. Unfortunately, the paper does not discuss this in detail\\uff08Especially, the training details of external expert models are not mentioned\\uff09. Additionally, in Table 3's ablation study, the minimal performance difference between the EDD only and final models suggests limited gains. 
Furthermore, using an untrained MLLM as a baseline for comparison seems unreasonable.\\n\\n10. In the strong feature data selection phase, the paper considers only one metric, and similarly, it evaluates performance using just that metric. This lack of additional metrics related to accuracy (ACC) or natural language processing (NLP) interpretability may limit the comprehensiveness of the results.\\n\\n11. The paper lacks a discussion on how common large models would perform in comparative experiments after fine-tuning, such as the effects of switching to different LLM base models, which is worth exploring further.\", \"questions\": \"see weakness for details.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FRF6 (Part 4/4 continued)\", \"comment\": \"**Q10. In the strong feature data selection phase, the paper considers only one metric, and similarly, it evaluates performance using just that metric. This lack of additional metrics related to accuracy (ACC) or NLP interpretability may limit the comprehensiveness of the results.**\\n\\n**A10:** Thank you for these comments. We would like to address your concerns as follows:\\n\\n1. **Rationale for the Strong Feature Data Selection Metric:** \\n - The metric used in the strong feature data selection phase is specifically designed to identify **features that are highly discriminative for distinguishing real and fake images** in the training dataset. \\n - To handle data imbalance in the training dataset (fake:real=4:1), we compute **balance accuracy (average of real image accuracy and fake image accuracy)** to ensure fair evaluation of features.\\n\\n2. **Relationship to ACC:** \\n\\n - This metric ensures that the selected strong features are not biased toward one class, and when the number of real and fake images is equal, the **balance accuracy** equals the overall ACC.\\n\\n3. 
**Relevance to the Task vs. NLP Metrics:** \\n - Our focus is on selecting **discriminative features** that effectively separate real and fake images. Metrics such as perplexity or other NLP-related interpretability measures are not directly relevant to our deepfake detection task (a vision task).\\n\\nWe hope this explanation clarifies the reasoning behind our metric choice, its relationship to ACC, and its alignment with the goals of our task. Thank you for your thoughtful suggestion, which helps us better articulate these points in our work.\"}", "{\"title\": \"Appreciation for Review and Request for Feedback\", \"comment\": \"Dear Reviewer 37wm,\\n\\nWe want to convey our sincere appreciation for the valuable insights and suggestions you provided regarding our work.\\n\\nWe have made efforts to address the concerns and queries you raised during the rebuttal process. It would be immensely helpful to receive feedback on whether our response effectively alleviated any doubts you may have had. Your feedback is crucial to enhancing the quality of our work.\\n\\nRecognizing the demands of your busy schedule, we genuinely appreciate your contribution to the refinement of our manuscript. As the end of the rebuttal period is approaching, we eagerly await your reply before the end.\\n\\nOnce again, thank you for your time and consideration.\\n\\nBest regards,\\n\\n Authors\"}", "{\"comment\": \"**Q3. The authors ignore prior work using LLMs for deepfake detection and over-claim their contribution (Lines 100-103)(r1,r2). The summary on Lines 067-075 misrepresents previous work [r1] by claiming\\u201c...fail to provide intuitive and convincing explanations behind predictions.\\u201d\\n[r1] FakeBench: Probing Explainable Fake Image Detection via Large Multimodal Models\\n[r2]Common Sense Reasoning for Deep Fake Detection**\\n\\n\\n**R3:** We appreciate your valuable feedback and the opportunity to clarify and improve our work. Below, we address the concerns raised:\\n\\n1. 
**Clarification of Scope and Context:** \\n - To avoid any misunderstanding, we would like to clarify that lines **47-75 in the paper** evaluate deepfake detection in the context of **traditional models**. Discussions of **LLM-related work**, including the ones you mentioned ([R1], [R2]), are presented later in the text (lines **125-140**). \\n - As stated in our paper: *\\u201cTo our knowledge, we are the first to **systematically assess the inherent capabilities** of MLLMs specifically in deepfake detection.\\u201d*\\n\\n2. **Relation to Prior Work and Scope Exclusion:** \\n - We acknowledge existing efforts using LLMs for deepfake detection, such as: \\n 1. **FakeBench** [1]: Evaluates **full (natural) image synthesis** using MLLMs. \\n 2. **Common sense reasoning** [2]: Utilizes human-labeled datasets to explore how textual explanations can improve detection performance, generalization, and interpretability. \\n 3. **Can chatgpt detect deepfakes? [3]**: Investigates the capabilities of multimodal large language models for deepfake detection. \\n - Our work provides a distinct contribution by **assessing the inherent capabilities of MLLMs and analyzing them in depth** in the specific task of **face forgery detection**. \\n - As noted in the footnote of our paper, our scope is explicitly limited to **face forgery detection** and does not extend to **full-image synthesis** tasks, such as those addressed in [1]. This distinction in focus explains why [1] was not a primary consideration in our discussion. However, we acknowledge its relevance and have revised the manuscript (lines 131\\u2013132) to make this distinction more rigorous.\\n\\n3. **Positioning of Our Contribution:** \\n - While we are not the first to use MLLMs for deepfake detection, we are the first to systematically **assess the inherent capabilities of MLLMs in face forgery detection**. 
\\n - This analysis provides a significant contribution to understanding the potential of MLLMs and demonstrates the necessity of leveraging **strong features from MLLMs** within our proposed framework. We believe this step is crucial for advancing the field.\\n\\nWe have revised the manuscript for clarity and updated Lines 131-133 to cite FakeBench.\", \"reference\": \"[1] Li Y, Liu X, Wang X, et al. FakeBench: Uncover the Achilles' Heels of Fake Images with Large Multimodal Models. ArXiv 2024.\\n\\n[2] Zhang Y, Colman B, Guo X, et al. Common Sense Reasoning for Deepfake Detection. ECCV 2024.\\n\\n[3] Jia S, Lyu R, Zhao K, et al. Can chatgpt detect deepfakes? a study of using multimodal large language models for media forensics. CVPRW 2024.\\n\\n---\\n**Q4. The authors do not present experiments on low-quality images or post-processed images.**\\n\\n**R4**.Thank you for this valuable suggestion. We would like to clarify that experiments on low-quality and post-processed images, including **Gaussian blur, block-wise distortion, contrast changes, and JPEG compression**, are presented in **Appendix A.3: Evaluation of Robustness Against Unseen Perturbations**. These results demonstrate our model's strong robustness in such scenarios. \\n\\nTo improve clarity, we have added the content in Lines 533-536 of the updated manuscript, helping readers locate the detailed results in the appendix.\", \"title\": \"Response to Reviewer FRF6 (Part 2/4)\"}", "{\"title\": \"Response to Reviewer FRF6 (Part 3/4)\", \"comment\": \"**Q5. The ablation study is insufficient. What would happen if WFS were applied to Strong Features or SFS to Weak Features? Or if both Strong and Weak Features used the same module?**\\n\\n**R5:** Thank you for raising this point. we would like to clarify the roles of Strong Feature Strengthening (SFS) and Weak Feature Supplementation (WFS).\\n\\n1. 
**Roles of SFS and WFS:** \\n - **SFS** enhances the model's **strong features** by focusing on areas where it already performs well. \\n - **WFS** supplements **weak features** using additional information to improve areas where the model struggles.\\n\\n2. **Impact of Applying SFS to Strong vs. Weak Features:** \\n - **SFS on Strong Features:** This helps the model better leverage its strengths. As shown in **Figure 6**, this method boosted recognition performance for the **Top 1/3 features** by **28.6%**.\\n - **SFS on Weak Features:** Applying SFS to weak features is ineffective because these features lack reliable annotations. Fine-tuning with unreliable data would yield poor performance and lead to untrustworthy outputs, similar to the **No SFS baseline** (random).\\n\\n3. **Exploration of Suggested Configurations:** \\n - We experimented with applying SFS alongside EDD for text annotation. As shown in **Table 2: Performance Comparison**, this approach resulted in only a minor performance gain (+1.19%) and proved less effective compared to our framework.\\n\\n**Table 2: Performance Comparison.**\\n\\n| **Module** | **CDF-v2** | **DFDCP** | **DFDC** | **DFD** | **Uniface** | **E4S** | **FaceDancer** | **FSGAN** | **Inswap** | **SimSwap** | **Avg** |\\n| ---------------------- | ---------- | --------- | -------- | ------- | ----------- | ------- | -------------- | --------- | ---------- | ----------- | ------- |\\n| **SFS** | 83.3 | 82.0 | 79.2 | 91.4 | 84.5 | 94.1 | 79.9 | 88.0 | 77.2 | 83.3 | 84.29 |\\n| **EDD Annotate + SFS** | 83.0 | 81.4 | 79.6 | 89.4 | 86.4 | 95.9 | 81.8 | 89.8 | 81.2 | 86.3 | 85.48 |\\n\\n---\\n\\n\\n\\n**Q6. The idea of using an external model (EDD in this paper) to support LLMs is not novel.\\n[r3] LISA: Large Language Instructed Segmentation Assistant \\n[r4] AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models**\\n\\n**R6:** Thank you for pointing out this observation. 
Below, we clarify the role of EDD in our work and its broader significance:\\n\\n\\n1. **Unique Contribution of Our Approach:** \\n - Thanks for mentioning [r3] and [r4], which are two studies showing the promise of MLLMs in using external models for enhanced performance, in segmentation and industrial anomaly detection tasks. However, the novelty and contribution of our work are reflected below:\\n - Utilizing external models is a task-specific design tailored to the *deepfake detection field*. Given the numerous well-performing conventional smaller expert detectors currently available, it is still questionable whether and how these smaller detectors can benefit the MLLMs for a comprehensive and more robust detection, creating \\\"1+1>2\\\" results. To our knowledge, there is a lack of exploration of the specific and effective strategies to address this, making the question still challenging and unexplored. To address this issue, we have proposed a reasonable and effective framework for implementation. For these reasons, our strategy is novel and makes a positive contribution to the current field.\\n - Utilizing external models accurately meets the needs in our case, where we found the MLLMs lack several capabilities in detection based on the **rigorous analysis of the limitations of MLLMs.** Therefore, using external models in our framework is suitable and can supplement the weak abilities of the original MLLMs.\\n\\n\\n2. **Broader Perspective on Tool Integration:** \\n - On a macro level, the use of external models reflects the **expandability of our framework**, which is inspired by the concept of **tool learning** in LLM. This paradigm has been explored across various fields and represents a **key strength of LLMs**. 
To some extent, we may also have been **inspired by these advancements** and applied this concept in a way that effectively enhances deepfake detection in our framework.\\n\\nBy integrating EDD, we not only address practical challenges in our task but also demonstrate how such tool-based extensibility can be harnessed in a focused and impactful manner. Thank you for your comments, which have allowed us to articulate this point more clearly!\"}", "{\"summary\": \"In this paper, the authors use a multi-modal large model to enhance the interpretability of face forgery detection. To construct an effective fine-tuning dataset, they propose first ranking various forgery features and then building a corresponding VQA dataset based on the top-ranked features. Additionally, the detection results from smaller models are fed into the LLM as extra guidance.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The recognition effect has been greatly improved, especially for cross-dataset scenarios.\\n2. It is the first exploration to design a novel MLLMs-based framework for explainable forgery detection.\\n3. The experiments are thorough and the comparisons are comprehensive, effectively demonstrating the validity of the proposed method.\\n4. The paper is well-organized and easy to understand.\", \"weaknesses\": \"1. It is hard to believe the simple fine-tuning of multi-modal large models typically can achieve such satisfactory results. The public source code can help prove its reproducibility.\\n2. Directly using the prediction probabilities of the smaller model as text input for the LLM seems unreasonable, as LLMs lack numerical reasoning capabilities.\\n3. Since Pretrained LLMs lack fundamental knowledge about true/false classification, ranking based on relevance generated by Pretrained LLMs may not be meaningful. It may be that random selection of other questions has a similar effect. Can the author provide further analysis?\\n4. 
The authors only select specific LLM and MLLM models in the framework. How can the generality and reasonability of the conclusion be ensured?\", \"questions\": \"1. Only the LLaVA model is chosen as the typical MLLM. Do most MLLMs contain similar properties? The author should point this out.\\n2. How can the commonality of the definition of forgery features be ensured? The authors utilized GPT-4o to generate a question list; how about other LLM models? Different LLM models may generate different question lists to extract forgery-related features.\\n3. In TABLE 2, why is the proposed method inferior to comparison methods with the fsgan and inswap generation models? Please give a deeper theoretical analysis.\\n4. In TABLE 3, the GCS concept was not explained clearly, and the ablation study settings are somewhat confusing.\\n5. To ensure the integrity of the experiment, the authors could also provide in-domain results on the FF++ dataset.\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer nfD6 (Part 1/2)\", \"comment\": \"**Q1. How can MLLM-generated data establish a more 'standard' answer?**\\n\\n**A1:** Thank you for pointing out the ambiguity in our use of \\\"standard.\\\" \\nWe agree with the reviewer that the word could lead to a lack of clarity, so we have eliminated this expression and substituted it with **\\\"efficient\\\"**, which we think is more appropriate and accurate here.\\nThis is because manually creating detailed annotations by humans can be costly and inefficient, while MLLM-based data generation is **cost-effective, potentially scalable, and faster**, making it an efficient choice for creating larger datasets. \\n\\nThank you again for bringing this to our attention. We have modified the content in Lines 133-134 of the **updated manuscript.**\\n\\n\\n**Q2. 
(a) The generated questions lack reliability due to the absence of a human verification process to ensure the accuracy of these fake features. (b) The circular logic is confusing and calls into question the robustness of the evaluation.**\\n\\n**A2:** Thank you for your insightful feedback, which has allowed us to address these important points. Here is our clarification:\\n\\n1. **Question Reliability and Human Verification:** \\n\\n - First, after ranking the questions generated by LLMs, we have already conducted a human verification to ensure the reliability and accuracy of the fake features. However, please note that most questions are generated quite well without any obvious errors or irrelevant information. We therefore omitted this description in the original manuscript. Following your suggestions, we have added this content to Lines 314-316 of the **updated manuscript** for better clarity.\\n - Second, we would like to clarify that we have explicitly required the LLMs to generate questions via our designed prompt, so the questions generated by LLMs are expected to be:\\n - **Clear**, focusing on features that indicate forgery. \\n - **Diverse**, avoiding repetition or ambiguity. \\n - **Broad Coverage**: By adjusting the number of generated questions, we can achieve wider question coverage.\\n\\n As shown in **Table 13 and Table 14**, the generated questions effectively meet our requirements, demonstrating the reliability and accuracy of these features. \\n\\n\\n2. **About Circular Logic:** \\n - We found from *Wikipedia* that \\\"circular logic\\\" is a logical fallacy in which the reasoner **begins with what they are trying to end with**. In our approach, what we begin with (**question lists** generated by LLMs) and what we end with (**ranked question lists** after the MFA module) are **distinct**. Therefore, our approach does not involve circular logic.\\n\\nThank you once again for your valuable comments. 
We hope our response can address your concerns effectively.\\n\\n\\n**Q3. SFS and WFS seem to be a process of dataset construction based on more effective features rated by the ranking in Sec. 3.3, thus becoming less pervasive in how it improves the explainability of MLLMs. More analyses are expected to verify the contributions of the procedure to the enhancement of MLLMs.**\\n\\n**A3:** Thank you for your valuable feedback. We improve the explainability by leveraging the stronger discriminative features and supplementing the weak ones using EDDs. Specifically:\\n\\n - **Focusing on Strong Features:** By fine-tuning MLLMs' focus on strong forgery features, we aim to encourage the MLLM to explain using only the strong discriminative features, paying less attention to the weak ones that the MLLM is not \\\"familiar with\\\", much like a feature selection process.\\n\\n - **Supplementing the weak using EDDs:** For those features that MLLMs might not be \\\"good at\\\" learning, such as the subtle blending artifacts (ranked at the bottom), we improve the explainability by leveraging the accurate information from EDDs and encouraging MLLMs to learn how to use this information for **credible and reliable explanations.**\\n - **Results Analysis:** \\n - **Feature Recognition Enhancement:** As shown in **Figure 6**, the model demonstrates significant improvements in recognizing strong features, with a **28.6% improvement** for the Top 1/3 features and **14.6% improvement** for the Top 1/2 features. This enhancement in feature recognition has also improved the model's overall explainability, making its explanations more sound and reliable. \\n - **Human Preference Improvement:** **Figure 7** highlights a notable increase in **human preference ratings** for the model's explanations.\\n\\nThank you once again for your insightful comments.\"}", "{\"title\": \"Response to Reviewer FRF6 (Part 1/4)\", \"comment\": \"**Q1. 
Given the extensive parameters and pre-trained data leveraged, the proposed method cannot achieve SOTA performance. The capability of handling unseen attacks limits its applications in real-world scenarios.**\\n\\n**R1:** Thank you for your comments; we would like to clarify that **our approach has already achieved SOTA performance in handling unseen attacks.** \\n - Results in **Table 1 (Cross-Dataset)** of the manuscript demonstrate that our method achieves SOTA performance across *all* datasets, with an **average improvement of 7.5%**.\\n - In **Table 2 (Cross-Manipulation)** of the manuscript, our method still achieves the best performance on three datasets and second-best on the remaining two, with an **average improvement of 6.9%**. This indicates robust generalization of our method across unseen manipulations.\\n - Furthermore, compared with conventional detectors such as the second-best ProgressiveDet (NeurIPS 2024) in **Table 1: Performance Comparison** below, our method demonstrates **lower result variance and greater stability**. Stability under unseen scenarios is critical for real-world applications, and our results highlight this strength.\\n - Given these reasons, we believe that the statement *\\\"The capability of handling unseen attacks limits its applications in real-world scenarios\\\"* might not be accurate or fair. 
In contrast, the results reveal that our approach achieves generally the best generalization performance for both cross-dataset and cross-manipulation evaluations, as well as the lowest result variance and best stability.\\n\\n**Table 1: Performance Comparison.**\\n\\n|Method|Venues|uniface|e4s|facedancer|fsgan|inswap|simswap|Avg|\\n-|-|-|-|-|-|-|-|-\\nCDFA|ECCV 2024|76.5|67.4|75.4|84.8|72.0|76.1|75.9\\nProgressiveDet|NeurIPS 2024|84.5|71.0|73.6|86.5|78.8|77.8|78.7\\nOurs (*w/o CDFA*)|-|84.5|94.1|79.9|88.0|77.2|83.3|84.5\\nOurs|-|85.2|91.2|83.8|89.9|78.4|84.9|85.6\\n\\nIn summary, our method demonstrates strong and stable performance across various scenarios, including unseen attacks.", "reference": "[1] Yan Z, Yao T, Chen S, et al. Df40: Toward next-generation deepfake detection. NeurIPS 2024.\\n\\n---\\n**Q2. X2-DFD\\u2019s explainability is oversimplified, which cannot provide sufficient information to users and offers less explanatory power than traditional methods like Grad-CAM (which can provide heatmaps). Is this a common issue across samples, and how is explainability assessed (e.g., through user studies)?**", "here_is_the_response_with_key_points_emphasized_in_bold": [ "**R2:** Thank you for raising this important question about explainability. We would like to clarify the reasons for the improved explainability of our method from the following perspectives.", "1. **Underlying Reasons for the Improvement of MLLM's Explainability by Our Approach:**", "The explanations provided by X2-DFD are derived from the **inherent strong features of MLLMs**, ensuring the output explanations are **more reliable**, as we only leverage the features that (1) the model inherently understands and (2) have enough discrimination for explaining deepfakes. As a result, using unfamiliar features or less discriminative features for explanation is largely minimized, thereby improving the reliability of the model's explainability.", "2. 
**Detailed Comparison of Grad-CAM and Ours in Explainability:**\", \"Although many previous conventional detectors use Grad-CAM for demonstrating the model's explainability, the visualization can only provide a **rough hint**, *e.g.,* highlighting the whole face region, but fails to further point out and explain the more fine-grained and detailed artifacts within the located region.\", \"In contrast to Grad-CAM, our approach is able to provide **more fine-grained explanation** about the specific forgery artifacts such as unnatural facial layout, blurry eyes, etc. Furthermore, we also leverage conventional detectors (EDDs) to further supplement the explainability of weak features.\", \"3. **Explainability Assessment through Human/User Studies:**\", \"To assess and compare the explainability of our approach with other MLLMs, **we have conducted a human study** involving *well-educated participants* who were provided with *clear guidelines*. Participants were asked to evaluate and compare explanations generated by different MLLMs to determine which they found most informative. We believe this assessment can provide a reasonable illustration to show that our approach has better explainability to humans.\", \"Additionally, our human-study experiment has undergone **institutional review board (IRB) approval**, adhering to ethical standards. Relevant documentation, including ethical review details and approvals, will be provided post-blind review, along with additional experimental details. We believe this transparency may further address and alleviate concerns regarding the Flag for Ethics Review.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
EoPsCAEYae
HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models
[ "Wenqiao Zhang", "Tianwei Lin", "Jiang Liu", "Haoyuan Li", "Fangxun Shu", "Wanggui He", "Zhelun Yu", "Lei Zhang", "Zheqi Lv", "Hao Jiang", "Juncheng Li", "Siliang Tang", "Yueting Zhuang" ]
Recent advancements indicate that scaling up Multimodal Large Language Models (MLLMs) effectively enhances performance on downstream multimodal tasks. The prevailing MLLM paradigm, \emph{e.g.}, LLaVA, transforms visual features into text-like tokens using a \emph{static} vision-language mapper, thereby enabling \emph{static} LLMs to develop the capability to comprehend visual information through visual instruction tuning. Unfortunately, the \emph{static} paradigm shares the same parameters to underlie multi-task instruction tuning, inevitably introducing potential \emph{task interference} or \emph{negative transfer}, \emph{i.e.}, where an improvement in the performance of one task reduces the performance of other tasks. In light of this, we introduce \textbf{HyperLLaVA}, which, in conjunction with a dynamic visual expert and language expert, respectively adjusts the parameters of the projector and LLM layers conditioned on diverse instruction semantics, thereby minimizing task interference. These experts are derived from HyperNetworks, which adaptively generate dynamic parameter shifts through visual and language guidance, enabling dynamic vision-language alignment and instruction tuning in two-stage training. To deeply study the multi-task interference of MLLMs, we build the \textbf{Comprehensive Multimodal Task benchmark} (\texttt{CMT}), a comprehensive benchmark for the evaluation of multidimensional multimodal tasks. The experiments demonstrate the superiority of the dynamic tuning paradigm for multi-task instruction following on \texttt{CMT} and general MLLM benchmarks. Our project is available at \href{https://anonymous.4open.science/r/HyperLLaVA-D58E}{https://anonymous.4open.science/r/HyperLLaVA-D58E}.
[ "Multimodal Large Language Model" ]
https://openreview.net/pdf?id=EoPsCAEYae
https://openreview.net/forum?id=EoPsCAEYae
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4YHsMHaOt", "tkkOXnbAdn", "EJ20faNMlC", "BjeLVjDgps", "APHlTBEUV7", "A8O7Mlfufs" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730562305336, 1731657003801, 1730635951914, 1730472224164, 1730275095422, 1730258362410 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5708/Reviewer_dfxV" ], [ "ICLR.cc/2025/Conference/Submission5708/Authors" ], [ "ICLR.cc/2025/Conference/Submission5708/Reviewer_HzoZ" ], [ "ICLR.cc/2025/Conference/Submission5708/Reviewer_bQ3m" ], [ "ICLR.cc/2025/Conference/Submission5708/Reviewer_dSaL" ], [ "ICLR.cc/2025/Conference/Submission5708/Reviewer_H2so" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes HyperLLaVA, which generates dynamic parameter shifts through visual and language guidance, enabling\\ndynamic vision-language alignment and instruction tuning in two-stage training. Experiments validate its effectiveness on some benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The writing is good and easy to understand.\\n2. Experiments are extensive and validate performance gains of the proposed method on some benchmarks.\", \"weaknesses\": \"1. MoE is not new in LLM and MLLM, and can also be a dynamic way of task transferring. Why not compare with these methods, such as MoE-LLaVA and LLaVA-MoLE?\\n2. Performance gains seem minor from the experiments. In Tab. 4, performance gains on SQA and MMB are almost negligible.\\n3. Line 300-301, why do the authors claim that CMT is zero-shot, since it\\u2019s actually a supervised setting? \\n4. I also question the value of the CMT benchmark, which simply combines existing datasets. Actually, existing benchmarks like MMVet and MMBench contain more complex and diverse multi-modal instructions, which are strictly zero-shot.\\n5. All figures could be clearer. 
It\\u2019s very difficult to understand the method from the figures.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper addresses potential issues of task interference or negative transfer that may arise in the multi-task instruction tuning stage of MLLM SFT training. Based on the concept of HyperNetworks, the authors propose dynamic visual and language experts that can adjust the corresponding projectors and LLM weights based on the semantics of the instruction. This approach aims to balance the potential negative effects of different tasks on each other during the multi-task instruction tuning stage. Additionally, the authors introduce a new benchmark to comprehensively evaluate various aspects of MLLM capabilities. The effectiveness of their method is validated on both existing benchmarks and the new benchmark they proposed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is clearly written and easy to follow\", \"This paper has conducted detailed experiments that show how each part works and validated that the overall method is effective.\"], \"weaknesses\": [\"**Limited Scalability of the Method**: According to Table 1 in the paper, as the model size increases from 7B to 13B, the improvements provided by the proposed method over the baseline decrease, with most accuracy gains staying within a range of 0.1 to 1.0. This suggests that the method may lack scalability.\", \"**Limited Innovation of the CMT Benchmark**: The CMT Benchmark proposed by the authors seems to primarily integrate existing open-source benchmarks, lacking significant innovation compared to other established benchmarks. 
Additionally, the models tested on this benchmark are quite limited, and no evidence is provided to show that it offers any insightful conclusions.\", \"**More Ablation Study on Each Component**: Tables 3 and 4 focus on validating the effectiveness of the different experts proposed by the authors. In some benchmarks, such as SQA, SEED, and MMB, there are hardly any significant improvements. The authors seem to lack an ablation study to evaluate the effectiveness of combining these two modules. Additionally, Table 5 indicates that the performance improvements of the proposed method are quite limited.\", \"**Some Content Is Unclear**: The authors should specify the number of parameters introduced by their method. In a supervised training setting, it is difficult to determine whether the performance gains are due to the increase in parameters rather than the method itself.\"], \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Recent advancements in scaling Multimodal Large Language Models (MLLMs) show improved performance on multimodal tasks, though static visual-language mappers like those in LLaVA limit potential across various tasks. To overcome this, the paper introduces HyperLLaVA, which makes adaptive adjustments to both the projector and language model parameters through dynamic visual and language experts. These experts, generated by a hypernetwork, produce adaptive parameter offsets that respond to visual and language cues. This dynamic modeling is achieved through a two-stage training process. Experiments confirm that HyperLLaVA significantly outperforms LLaVA on multiple MLLM benchmarks, addressing limitations of static model adaptation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The writing is fluent. 
The paper effectively identifies two challenges in applying hypernetworks to MLLM: weak correlation and unstable optimization.\", \"weaknesses\": \"The paper discusses the issue of unstable optimization in large language models and suggests a potential trade-off between the number of parameters generated by hypernetworks and optimization stability. However, the authors have only considered a minimal number of additional parameters (it would be helpful if the paper specified what percentage these parameters represent of the original network). For instance, the addition of the visual expert is limited to the projector, which seems similar to adding a joint fine-tuning LoRA. The actual performance enhancement from the visual expert is unconvincing since it is only integrated at the input layer, resembling an enhanced connector, while many MLLM studies have shown that connectors contribute minimally to model performance.\\n\\nFurthermore, in the language expert, the use of half-layer language guidance as input for the hypernetwork is quite unusual. Using the sample x itself, particularly the hidden layer embedding of its own sample, is rare (relevant literature would be appreciated if available). This self-generated network approach might suggest that it merely adds an extra structure rather than generating effective parameters, akin to LoRA, requiring many additional tuning parameters. It is unclear how the choice of using half of the layers was made. Will generating parameters for the second half of the network from the first half's output create a highly coupled network structure? Is there any pruning of structures like the hypernetwork during inference?\\n\\nLastly, the paper lacks clear motivation for the method used, and it is not clear what exactly constitutes the \\\"unstable\\\" performance mentioned in Table 5. Is this instability due to the simplicity of the projector structure or other optimization issues, such as drastic parameter changes? 
If adding experts to the projector laterally can improve these issues, why not utilize a multi-head approach in the projector? Considering inference costs in MoE and the unclear independence of the visual expert, these questions remain. Moreover, the experiments compare with LLaVA v1.5 (NeurIPS 2023), but it would be beneficial to compare with more recent works like CogVLM, Wings, and Mono InternVL. The performance improvements in Table 4 are almost all below 0.5%, which seems unconvincing.\", \"questions\": \"Is the training process of hypernetworks similar to the bilevel optimization seen in NAS (Neural Architecture Search)? What are the advantages and characteristics of parameters generated by hypernetworks compared to directly fine-tuned parameters? Could you possibly provide quantitative or visualized results?\\n\\nIn Table 5, the bolded section for MME accuracy seems incorrect. Typically, \\\"LoRA\\\" is the preferred capitalization rather than \\\"LoRa.\\\" Additionally, the use of \\\"L\\\" in Equation 6 may lead to misunderstanding, e.g., being read as \\\"Loss\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The static paradigm in current MLLM, like LLaVA, can introduce potential task interference, i.e., improvements for one task can cause performance degradation on other tasks.\\nThus, the paper introduces HyperLLaVA to adjust the parameters of the vision projector and LLM conditioned on instruction semantics.\\n\\nExperimental results are reported on commonly used benchmarks, like VQAv2, GQA, VizWiz, SQA, POPE, MME, MMB, SEED, and LLaVA-bench. Reasonable improvements are achieved.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The method is simple to implement.\\n(2) Sound experimental results and ablations.\", \"weaknesses\": \"(1) The symbols are not consistent. 
What are the relationships between $\\\\xi$, $E^{n}()$, and $H_M$ in Eqs (2), (3), (5)?\\n\\n(2) What's the \\\"L1\\\" in line 250?\\n\\n(3) Why can the last input token $h^{L/2}$ at the L/2-th layer fully perceive the whole multimodal context?\\n\\n(4) The comparison with LLaVA seems unfair because the paper uses an extra 505K CMT training data.\\n For the HyperLLaVA, is the LLM fully fine-tuned or trained with LoRA?\\n\\n(5) Indicated by the formulation of $\\\\mathcal{V}$ in line 246 for the vision projector and Eq. (6) for the LLM, it seems not to be dynamic but an extra residual output. \\nMoreover, what's the network structure of $\\\\xi$ in Eq. (6)?\", \"questions\": \"(1) The symbols should be defined systematically and consistently.\\n(2) The comparisons with baselines should be fair.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the problem of the static mapping of existing MLLMs, and proposes a design of dynamic vision and language token experts to handle the claimed ``negative transfer'' of MLLMs. However, the reviewer thinks that the problem this paper focuses on is ill-defined, which greatly contradicts the main principle of existing MLLMs. Meanwhile, the experimental results cannot well support the argument of this paper, nor the design proposed in this paper.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Subjectively speaking, given an ill-defined problem, the other efforts made by the authors can hardly be regarded as the merits of this paper.\", \"weaknesses\": \"1. The main problem of this paper is its motivation. Task transfer learning is important for MLLMs, but it is hard to say negative transfer really matters in MLLMs. 
Above all, the most recognized principle of MLLMs is to use the giant LLM to learn massive multimodal data at the same time, thus generalizing to all downstream tasks. In this case, how can the authors argue that the static parameters are not good for large-scale MLLMs, based only on the observation of [Wang et al. 2019], which experimented on small-scale models and benchmarks?\\n\\nMore theoretical analyses and a recent literature review are required to validate the motivation of this paper. \\n\\n2. In addition, the design focus of this paper is also very problematic, i.e., learning dynamic projection for visual and text tokens. The main principle of most MLLMs is to project tokens of different modalities onto the same semantic space, i.e., that of the LLM. Why do the authors think the task-specific projection is better than this mainstream paradigm? Moreover, the text tokens have been well trained with the LLM architecture, so why is a task-specific text embedding needed? \\n\\nSimilarly, more theoretical analyses and a recent literature review are required to validate this design. \\n\\n3. The experiments do not support the argument of this paper. The overall improvement of the proposed Hyper-LLaVA is very marginal, which can be easily achieved by just increasing the training steps of LLaVA-1.5. Meanwhile, the other experiments, as well as the qualitative analyses, also do not strongly support the motivation for the dynamic token design of this paper.\", \"some_detailed_suggestions\": \"1. Provide a direct comparison with LLaVA-1.5 trained for additional steps.\\n2. Include more rigorous statistical analyses to demonstrate significance.\\n3. 
Design additional experiments that more directly test the benefits of the dynamic token approach.\\n\\nBesides, the complexity and additional steps of the proposed dynamic token projection should be better highlighted.\\n\\nOverall, this paper states an argument against the common research paradigm of existing MLLMs, but it is not sufficiently supported in terms of principle, methodology, or experimental results.\", \"questions\": \"My concerns and questions are given in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
Em6GkQfLKM
Exploring Group and Symmetry Principles in Large Language Models
[ "Shima Imani", "Hamid Palangi" ]
Large Language Models (LLMs) have demonstrated impressive performance across a wide range of applications; however, assessing their reasoning capabilities remains a significant challenge. In this paper, we introduce a framework grounded in group and symmetry principles, which have played a crucial role in fields such as physics and mathematics, and offer another way to evaluate their capabilities. While the proposed framework is general, to showcase the benefits of employing these properties, we focus on arithmetic reasoning and investigate the performance of these models on four group properties: closure, identity, inverse, and associativity. Our findings reveal that the LLMs studied in this work struggle to preserve group properties across different test regimes. In the closure test, we observe biases towards specific outputs and an abrupt degradation in their performance from $100\%$ to $0\%$ after a specific sequence length. They also perform poorly in the identity test, which represents adding irrelevant information in the context, and show sensitivity when subjected to the inverse test, which examines the robustness of the model with respect to negation. In addition, we demonstrate that breaking down problems into smaller steps helps LLMs in the associativity test that we have conducted. To support these tests, we have developed a synthetic dataset, which will be released.
[ "probing", "robustness", "LLM" ]
Reject
https://openreview.net/pdf?id=Em6GkQfLKM
https://openreview.net/forum?id=Em6GkQfLKM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z83VnLOg4s", "wzo5FruWf2", "wBXk9g0h48", "YzC0epTKjo", "JhCCUmG4n1", "FZNugrs1Ls", "FM2W2odsS2", "48k14cjHpu", "3kew04geo4" ], "note_type": [ "decision", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523847681, 1730047851072, 1734706642621, 1732733096830, 1730438313107, 1732628407445, 1732622933321, 1730670702547, 1729858235633 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_ssWU" ], [ "ICLR.cc/2025/Conference/Submission7564/Area_Chair_UXv8" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_sGtk" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_sGtk" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_Jgga" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_6eXX" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_Jgga" ], [ "ICLR.cc/2025/Conference/Submission7564/Reviewer_6eXX" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper takes a first-principles approach to the evaluation of language models. The authors create a succinct dataset of prompts comprising addition operations under various perturbations. The purpose of the perturbations is to evaluate whether, and the extent to which, a language model is invariant to such perturbations. The perturbations test closure, identity, inverse, and associativity properties.\\n With this dataset, they report results on GPT 3.5 and 4.0 in the main body of this paper, and results on other models in the appendix. The authors also introduce irrelevant text into the prompts of GSM8K and report results on GPT-4o and Mistral-7B Instruct.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"**Originality**: The first-principles approach of this paper is refreshing. 
Grounding a language model robustness evaluation dataset in group theoretical invariances is clever and has the potential to provide insights beyond what is currently known about language models' behaviors.\\n\\n**Quality**: The quality of the writing is overall good. It situates the work with ample and appropriate citations of related literature. \\n\\n**Clarity**: The authors clearly state the purpose and contribution of the paper. The method is described clearly and illustrated nicely in the figure on p. 5.\\n\\n**Significance**: Analyses of this sort have the potential to contribute valuable insights into the workings of language models and how to improve them.\", \"weaknesses\": [\"**Originality**: In general, papers that focus on specific operations -- like this paper, or like [1] -- demonstrate a weakness, provide valuable insights based on that weakness, and possibly explore experimentally ways to overcome the weakness. This paper demonstrates weaknesses and derives few new insights. While the concept of the dataset described in this paper is novel in ways, I'm afraid that the originality of the *insights* provided by the dataset is quite limited. The limitations of language models under conditions that require variable amounts of computation are pretty well known by now. For instance, the insight that language model accuracy on arithmetic tasks is constrained by the length of the (implied) computational graph was precisely defined and thoroughly evaluated in [1]. Their robustness under variation in the position of inputs in the context window has been explored in [2]. And the effectiveness of improving language model accuracy on such tasks by encouraging them to traverse high-probability regions of their hidden states is also well known [3]. I would encourage the authors to consider how group theoretical principles could be explored more thoroughly or systematically to provide insights beyond what has been shown already. 
For example, some recent work [4] has shown that recurrent networks consistently out-perform transformers as detectors of regular languages. Do similar inequalities exist between network architectures with respect to group theoretical principles?\", \"**Quality**: The scale of the experiments reported here is small (10 runs), and the number of models evaluated is modest. The results of the dataset created by the authors are reported on GPT 3.5 and 4.0. The ancillary experiment described in the paper was performed on GPT-4o and Mistral-7B-Instruct. While OpenAI only released their o1 model on September 12th of this year, the dataset in this paper is so small that running it against o1 could provide insight into a variable-computation model that has not yet been evaluated much (although see [5]). While simplicity is a virtue, the dataset evaluated here could be created in, I estimate, about 4 hours, so the authors could have spent much more time on other things, like evaluating group theoretical principles on more than just addition.\", \"**Clarity**: I think the clarity of the paper is not a weakness. I would encourage the authors in a future version of the paper to put less emphasis on the potential insights that group theoretical principles can bring to language model evaluation and much more on the actual insights.\", \"**Significance**: See my comments above under **Originality**. One point made by the authors is that the sheer smallness of their dataset implies that problems identified by other, larger datasets can be done at lower cost using their dataset. If this is indeed the case, the paper would be improved by showing at a minimum how much less expensive it is. 
A potentially interesting future line of research is robustness benchmarks with provably minimal cost and provable equivalence (of what is tested and surfaced) with more expensive benchmarks.\", \"[1] Dziri et al, Faith and Fate: Limits of Transformers on Compositionality, NeurIPS 2023\", \"[2] Liu et al, Lost in the Middle: How Language Models Use Long Contexts, TACL 2024\", \"[3] Wei et al, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS 2022\", \"[4] van der Poel et al, MLRegTest: A Benchmark for the Machine Learning of Regular Languages, JMLR 2024\", \"[5] Valmeekam et al, LLMs Still Can't Plan; Can LRMs? A Preliminary Evaluation of OpenAI's o1 on PlanBench, arXiv:2409.13373\"], \"questions\": \"What insights beyond those in [1,2,3,4] cited above does your benchmark provide? What is the cost difference between using your benchmark and a set of benchmarks that surface the same weaknesses?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper aims to develop a general framework to evaluate the reasoning capabilities of LLMs using group properties. While the topic is timely and important, all four reviewers give low scores due to the following concerns: 1) the contribution is not clear as group properties are frequently used for evaluation in the literature; 2) the presentation is poor that the literature review is insufficient and the main conclusions are not clear; 3) the experiment is limited to very simple scenarios. No rebuttal is provided from the authors. AC agrees this work is still in progress and requires substantial improvement.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal is submitted, and so reviewers keep their initial ratings.\"}", "{\"comment\": \"Since there was no response from the authors, I stand by my initial review. 
I would be glad to discuss this work and amend reviews if necessary in lieu of new information provided by the authors.\"}", "{\"summary\": \"This work examines how well LLMs adhere to principle of symmetry and group properties such as associativity and transitivity in their reasoning. These capabilities are tested by way of conducting experiments over four properties: closure, identity, inverse, and translation. These experiments reveal persistent reasoning errors ranging from bias towards certain numbers to a sensitivity to perturbations. This work also correlates these error patterns over the simple arithmetic setting to similar behaviours over more complex tasks. Finally, a synthetic dataset is proposed that contains these adversarial data samples curated to identify model reasoning pitfalls.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"There is definite value to examining the types of reasoning errors by testing independent properties to identify these patterns of behaviour in other tasks.\", \"This work builds on previous literature which demonstrate the susceptibility of models to simple perturbations in input data.\", \"There is some (qualitative) analysis comparing model outcomes on actual mathematical word problems (MWPs).\", \"The theoretical basis is well-researched and results in a useful framework for identifying error patterns beyond simple accuracy measures.\", \"There are some interesting insights re: model bias for certain numbers.\"], \"weaknesses\": [\"The implication of the results and analyses in this work is that if LLMs can pass these property-based tests, they exhibit certain skills associated with an understanding of these properties. eg.\", \"if this framework shows that a model has an understanding of associativity, the model can decompose and solve problems. 
However, there is no evidence provided for this hypothesis, for example by comparing model accuracy on the associativity test and in parallel on another pre-existing task that requires decomposition.\", \"Similarly, there is a lack of more comprehensive quantitative results based on accuracy metrics that confirm the hypothesis that patterns observed over the simple {1,0} addition task are also observed over more complex tasks such as solving natural language MWPs. This makes it difficult to definitively prove the claim that these tests can offer insights into the type of reasoning errors made by models.\", \"The {1,0} based addition test is arguably too simplistic to gauge overarching error patterns in LLMs. Furthermore, the tests proposed in this work (identity, closure, etc) could be performed on problems of far greater complexity.\", \"Given the insight that models are biased towards numbers such as 0, 50, 100: the inverse test is likely to yield optimistic results, as the correct answer is always 0. It may be more illustrative to perform this test with negations that amount to non-zero results, such that the sum of results over all tests in the experimental setting is zero.\", \"Table 3: there is no baseline performance provided over questions in their native, unperturbed state.\", \"Page 15: The prompt to rate similarity does not definitively define the rubric, and there is potential for ambiguity in interpreting the descriptions associated with the scores 2, 3, and 4.\"], \"questions\": [\"Sections 3.1 and \\\"Experiments on SLM\\\" section of the appendix demonstrate that this framework can be utilized with any set of whole numbers. 
Why are the main experiments limited to elements from the set {0,1}?\", \"Similarly, in order to more accurately model real world problems, which contain both affirmative and negative information, it may be useful to conduct these addition-based group property evaluations with elements from the group {-1, 0, 1}.\", \"The translation and random swapping tests, specifically in the context of identity preservation, may result in the same input pattern. Is there any significance to or observable insight from the differing results of these tests?\", \"What is the intuition behind the ratios selected for Test 1 and Test 2 of the associativity test (Sec 3.6)? Providing results over a range of ratios may better identify the optimal split.\", \"This work proposes a variant of GSM-8K with irrelevant information in Section 3.7. How does this variant differ from existing adversarial datasets such as GSM-8K (M3) [1], GSM-8K-Adv [2], and GSM-NoOp [3], which are also perturbed variants of GSM-8K?\", \"[1] Xie, R., Huang, C., Wang, J., & Dhingra, B. (2024). Adversarial Math Word Problem Generation.\", \"[2] Anantheswaran, U., Gupta, H., Scaria, K., Verma, S., Baral, C., & Mishra, S. (2024). Cutting Through the Noise: Boosting LLM Performance on Math Word Problems.\", \"[3] Mirzadeh, I., Alizadeh-Vahid, K., Shahrokhi, H., Tuzel, O., Bengio, S., & Farajtabar, M. (2024). 
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"As there were no responses from the authors, I stand by my initial review.\"}", "{\"title\": \"No response\", \"comment\": \"Since there was no response from the authors, I stand by my initial review (this response was prompted by the review system).\"}", "{\"summary\": \"In this paper, the authors develop a framework for language model evaluation based on group properties and demonstrate its application to addition. Their evaluation framework centers on four key properties: closure, identity, association, and inverse. The authors reveal that GPT-3.5 and GPT-4 fail to maintain consistent performance across closure, identity, and association tests. Moreover, they show that leveraging the associative property of addition\\u2014breaking down large addition problems into smaller, more manageable pieces\\u2014improves the performance of these language models.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The symmetries and structure inherently contained in a problem can often give us a lot of information about it. I agree with the authors that this is a fundamental idea which has already permeated many areas of science. Recently, we have already seen them being applied to machine learning through many facets (e.g. geometric deep learning, neurosymbolic AI, and more recently, interpretability). Thus, the topic of the paper is timely and relevant to the field.\", \"weaknesses\": \"- The authors strongly emphasize that the contribution of the paper is *not* their benchmarks on arithmetic tasks but rather a general benchmarking framework. In this case, the contribution of the paper is unclear to me. 
Based on my understanding of the literature, it seems that the field is already aware of using group properties/structures to improve/evaluate model performance (see [1,2,3,4,5] and much more\\u2026). To this end, many benchmarks, as the authors themselves mention, already incorporate these elements into their evaluation process [6].\\n- Given this example of arithmetic, it is unclear to me how this framework would generalize to other more complicated tasks that the community is interested in: one where the symmetries exist, but it is not at all obvious how to design benchmarks based on these symmetries. Consider examples like text-to-image, image-to-text, hallucination detection, alignment, entity resolution, etc.\\n- As is, the paper is very poorly written. It is hard to follow what the main claims are, what the experimental settings are, and even what the main results are.\\n- The related literature section is also nonexistent/weak. I feel that the related literature should answer at least the following two questions:\\n - How is this framework similar/different to other evaluation frameworks that already exist in the literature?\\n - Are the experiments/results that we get from this framework consistent with what we see in the related literature (on this arithmetic task)? Or even better, do the experiments/results generalize or explain some phenomena that the community does not understand?\\n\\nI strongly believe this work is still in progress and would greatly benefit from substantial revisions and additional rounds of review beyond the scope of the ICLR review cycle. \\n\\n[1] Godfrey, Charles, et al. \\\"On the symmetries of deep learning models and their internal representations.\\\" *Advances in Neural Information Processing Systems* 35 (2022): 11893-11905.\\n\\n[2] Liu, Ziming, and Max Tegmark. \\\"Machine learning hidden symmetries.\\\" *Physical Review Letters* 128.18 (2022): 180201.\\n\\n[3] Winter, Robin, et al. 
\\\"Unsupervised learning of group invariant and equivariant representations.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 31942-31956.\\n\\n[4] Crabb\\u00e9, Jonathan, and Mihaela van der Schaar. \\\"Evaluating the robustness of interpretability methods through explanation invariance and equivariance.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2023): 71393-71429.\\n\\n[5] Wang, Yipei, and Xiaoqian Wang. \\\"Self-interpretable model with transformation equivariant interpretation.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a034 (2021): 2359-2372.\\n\\n[6] Srivastava, Aarohi, et al. \\\"Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.\\\"\\u00a0*arXiv preprint arXiv:2206.04615*\\u00a0(2022).\", \"questions\": [\"In the abstract, the authors mention \\u201cassessing [language models\\u2019] reasoning capabilities remain a significant challenge. I am unaware of such challenges and the authors do not elaborate on this point in the main text further? Can you give some examples from the literature where people have shown this to be the case?\", \"Why do the authors call the closure test the closure test? It seems that the closure property ($a, b \\\\in G \\\\Rightarrow a\\\\circ b \\\\in G$) is not being tested, but rather the abelian property of the group (lines 231-237).\", \"The closure test described in lines 231-237 seems different from what the authors describe in lines 281-295. And, after reading this latter section, I no longer understand what the goal of this test is. Can the authors elaborate on this?\", \"Lines 441-442, for the associativity test, the authors choose only ratios (3/8, 5/8) and (1/4,3/4). Why were these specific numbers chosen and are there ablations over the \\u201cpivot\\u201d point. Did the authors test instead of splitting the problem into two distinct parts, splitting it into multiple (i.e. 
greater than 2)?\", \"Line 455, the authors claim that LLMs fail to preserve associativity beyond a certain point. It seems that there are multiple confounding variables here. For example, if you have length 120 then a (3/8,5/8) split corresponds to lengths (45, 75). Of these two splits, the longer one is essentially the closure test. So, why is the closure test not a subset of the associativity test?\", \"Based on the previous question, I guess I don\\u2019t understand what the associativity test is testing. Because you are breaking the problem down for the language model, rather than the language model recognizing that there are recursive subproblems here. So even if you demonstrate that an LLM perfectly solves this associativity test, what would you be able to say about how it is generalizing?\", \"The authors extend their results from addition problems to the GSM8K dataset. They add irrelevant information to the prompts using GPT. How did you verify that the information being added was indeed irrelevant? I do not see any information on this in the appendix.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present different principles of symmetry and explain how those principles can be used to analyze the shortcomings of Large Language Models (LLM). The authors then provide experiments across the different principles to show how and where these LLMs fail, specifically looking at how these models perform under larger contexts over varying symmetries. All experiments are conducted on varying chains of addition between -1, 0, 1.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors use the known approach of analyzing symmetry within a system to probe LLMs and discover their shortcomings.\", \"weaknesses\": \"Main concerns:\\n1. The presentation of the results is extremely poor. 
Do not use heat maps to present accuracy. Typically you want to use line graphs for experiments that vary one parameter many times. Try using line graphs or bar graphs for all of the figures that are heat maps.\\n2. In the experimental sections, each of the results is briefly discussed in regard to how they show that the LLMs are failing. It would be useful to also discuss how these insights can lead us to build better LLMs.\\n3. The experimental setting of using only -1, 0, 1 makes these experiments feel like toy experiments. Using sentences and paragraphs, natural language, would be much more interesting and significant.\\n4. Excessive use of bullet points. Write out full paragraphs.\\n5. From this paper, it is still unclear how closure and associativity are different.\\n6. The introduction and background span 4 full pages. For a paper such as this, you could probably cut down 1 or 2 pages of content from those 4 pages so that there is more room to provide experiments and thoroughly discuss results.\", \"writing_comments\": \"Page 1, Line 41: Group and symmetry principles which made --> Group and symmetry principles have made\\nPage 2, Line 60 - 102: These do not need to be bullet points. The paper would be more aesthetically pleasing if the bullet points were removed and the topic markers were instead bolded. Also, Page 2, Line 58 should probably be changed to mention each of the topics instead of ending on a colon.\\nPage 3, Line 113 - 126: The break from bullet points and figure insertion looks odd. 
Keep the text with the previous bullet point and move the figure to the top of the page.\\nPage 4 - 5, Line 198 - 222: Again, the topic markers for each bullet point can just be bolded and the bullet points can be removed.\\nPage 3 & 5: Should the figures presented on each page not have a caption and figure number?\\nPage 5, Line 269: one, zero, and negative one --> 1, 0, -1\\nPage 8, Line 407 - 408: Having 1.5 lines between two almost full-page figures is poor presentation.\", \"questions\": \"The methods are mostly clear. It is mostly a presentation and experimental scope issue. See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
ElYRG3pJcv
Optimizing Inference-Time Reasoning in LLMs via Retrieval-Augmented Reflection
[ "Zihao Wang", "Haowei Lin", "Ruilin Yan", "Xiangyu Wang", "Jiaqi Li", "Weiye Shi", "weipeng chen", "Xiaojian Ma", "Anji Liu", "Yitao Liang" ]
Empowering LLMs to improve their performance through increased inference-time computation is a crucial step in developing self-improving agents capable of operating in open-ended natural language contexts. In this paper, we explore how iteratively revising a chain of thoughts guided by information retrieval significantly improves large language models' reasoning ability in challenging tasks, while greatly mitigating hallucination. In particular, the proposed method --- \emph{retrieval-augmented reflection} (RaR) --- revises the generation tokens step by step, leveraging multiple pieces of retrieved information relevant to the intermediate reasoning steps and the instruction. Applying RaR during inference time to a varied set of language models substantially improves their performance on various reasoning tasks, relatively increasing scores by up to +16.4\% on code generation, +11.6\% on mathematical reasoning, and +29.1\% on embodied task planning. Moreover, we find that with more inference-time computation given to the LLM for multiple rounds of retrieval-augmented reflection, the LLM can continuously improve on various reasoning benchmarks. A small LM can surpass the performance of an LM with more than 10 times the parameters when given more computation.
[ "Retrieval-augmented Generation", "Reasoning", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=ElYRG3pJcv
https://openreview.net/forum?id=ElYRG3pJcv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nBnDrwqhur", "lLgZGunK8c", "jcsw4tdwIP", "gyYEllmB04", "fn3Ihu6OoN", "QqFRkNMIts", "QRuTbgI0hd", "AmyHCFaD77", "2myjyhKQTq" ], "note_type": [ "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "meta_review" ], "note_created": [ 1730285105950, 1737524042364, 1732766338534, 1730665113364, 1732369356692, 1732369643574, 1729068405229, 1730086309319, 1734918339724 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10331/Reviewer_yr3i" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10331/Reviewer_yr3i" ], [ "ICLR.cc/2025/Conference/Submission10331/Reviewer_XHx9" ], [ "ICLR.cc/2025/Conference/Submission10331/Authors" ], [ "ICLR.cc/2025/Conference/Submission10331/Authors" ], [ "ICLR.cc/2025/Conference/Submission10331/Reviewer_UX3s" ], [ "ICLR.cc/2025/Conference/Submission10331/Reviewer_Pozp" ], [ "ICLR.cc/2025/Conference/Submission10331/Area_Chair_u2dg" ] ], "structured_content_str": [ "{\"summary\": \"Many research efforts focus on correcting misinformation generated by language models through retrieval augmentation. This paper explores how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning abilities across various tasks, such as code generation, mathematical reasoning, and embodied planning. Experimental results show that, compared to methods like language model self-consistency and simple retrieval augmentation, the proposed RaR framework is more effective in mitigating hallucinations in the content generated by models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper explores iteratively revising a chain of thoughts with the help of information retrieval to improve large language models' reasoning abilities. Its strengths include:\\n\\n1. 
The writing in the paper is clear, and the figures are intuitive, effectively conveying the main ideas and supporting the overall arguments.\\n\\n2. The method proposed by this paper incorporates a recursive correction mechanism by revising each thought step based on previously refined steps, allowing for continual consultation of relevant information. This significantly improves the accuracy and reliability of generated outputs compared to traditional RAG methods.\\n\\n3. RaR is flexible in handling various tasks, such as code generation, mathematical reasoning, and embodied planning.\", \"weaknesses\": \"1. The baseline methods compared in Table1 are weak, and the proposed approach has not been compared with the latest related RAG research, which does not demonstrate the significant differences between RaR and existing work (Active-RAG, IRCoT, ...).\", \"questions\": \"1. Is the number of modification iterations in RaR related to the number of sentences in the answers? In that case, when comparing with RAG, is the number of retrievable instances in RAG consistent with that of RaR?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your feedback. After considering the insights shared by other reviewers, I have decided to retain the current rating score.\"}", "{\"summary\": \"This paper studies the problem of leveraging RaG to improve LLMs\\u2019 reasoning capabilities, and proposes a new approach called Retrieval-augmented Reflection (RaR). RaR first generates an initial solution, and then alternates the process retrieval and revision to revise each step in the initial solutions. The paper evaluates the proposed approach on three domains and compares against a series of zero-shot baselines. 
The results suggest that the method can improve LLM performance on code generation and embodied planning tasks.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The results presented in the paper suggest good empirical performance.\\n\\nThe paper evaluates the method on multiple datasets spanning several domains.\", \"weaknesses\": \"I have several concerns, mainly regarding the evaluation. The comparison is probably unfair, which overstates the performance gains. Additionally, insufficient detail about the evaluation setup makes it difficult to validate the results. Specifically:\\n\\nFirst, the comparison is unfair. The paper compares RaR against direct/CoT/self-consistency/RAG approaches. Among these, Direct/CoT and RAG are approaches that produce answers with one LLM call. In contrast, RaR involves revising each reasoning step. IIUC, RaR would involve T LLM calls for producing an answer, where T is the number of steps, plus the additional costs of processing retrieved documents. To ensure a fair comparison and to fully understand the differences between approaches in terms of computation cost, I think it is necessary to 1) list out the computation cost needed for each approach (in terms of number of tokens) and 2) compare methods under the same computation constraints.\\n\\nSecond, the missing details on the experimental setup make it harder to interpret these results. Below are some concrete points:\\n* Table 1: Are all methods using the same computation budget? For self-consistency, how many samples are considered? For self-refine, how many rounds of evaluation are made?\\n* Table 2 (math reasoning and task planning sections): Which models are employed?\\n* Table 2 (QA section): The table structure is unclear. Different rows appear to use different base models (e.g., Reasoning row vs.
RAG row), and the base models for the fourth RAG row are unspecified.\\n\\nIn particular, given the lack of description of experimental details, I also find some numbers that seemingly need further verification.\\nIn Table 1, GPT-4's reported direct performance on HumanEval and MBPP (57.3% and 60.0%) differs from the GPT-4 technical report (67.6% and 68.3% for zero-shot, pass@1).\\n* For instance, in Table 1, it is reported GPT-4 gets direct performance of 57.3% and 60.0% on HumanEval and MBPP, respectively. But some other work, including the GPT-4 tech report and AgentCoder (Huang et al., 24), reports that GPT-4 gets a performance of 67.6 and 68.3 (zero-shot, pass@1) on HumanEval and MBPP respectively. It is unclear what leads to such a mismatch.\\n* Table 2's math reasoning results show CoT significantly underperforming DIRECT, which seems counterintuitive.\\n\\nThe paper needs to provide comprehensive experimental details and restructure Tables 1 and 2 for clarity and accuracy.\\n\\nAdditionally, the paper lacks sufficient analysis of its results, especially for the qualitative outcomes on mathematical tasks. According to the results, the approach shows significant performance improvements in code generation and mathematical reasoning. But the underlying reasons for these improvements are not sufficiently explained. For code-related tasks, the improvements are somewhat intuitive as there often exist multiple code implementations with similar semantics. For mathematical reasoning tasks, which are typically more problem-specific, it would be valuable to understand what drives such improvements. The paper does not provide analysis either in the main body or in the appendices to explain these performance gains.
It would be good to provide some analysis around this.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> The writing of method part in the paper (Line 216~Line 239) is somewhat vague. While I can understand the general idea of the proposed method, I cannot accurately grasp the specific details through the formulas and descriptions, and clarification from the authors is needed.\\n\\n\\nThank you for your reply. We have rewritten the method section of RaR based on your feedback. In Section 3.1, we first introduce how the standard Retrieval-Augmented Reflection is implemented and the differences between RaR and Retrieval-augmented Generation (RAG). In Section 3.2, we demonstrate how Iterative RaR iteratively updates intermediate reasoning steps and the final response. In Section 3.3, we show how RaR and other baseline methods scale when given more inference-time tokens. We have also updated Algorithm 1 to make it easier for readers to understand the RaR pipeline.\\n\\nIf you have any further questions about the method, feel free to discuss them with me.\\n\\n> The author claims that by expanding n (the number of reasoning steps) to achieve Inference Scaling. However, this dimension seems not easy to expand. On one hand, the number of reasoning steps generated by Zeroshot CoT is not entirely controllable. On the other hand, for a specific problem, the number of reasoning steps is clearly limited. These two aspects make it less feasible and scalable to increase the number of reasoning steps in order to increase the number of verification rounds.\\n\\nThank you for your reply. Iterative RaR does not explicitly increase the number of CoT steps to expand n. During the reflect and revise process, Iterative RaR focuses on: 1. the accuracy of reasoning steps and 2. the consistency of the final response. 
When generating retrieval queries, we let RaR first focus on the reasoning steps, usually m steps correspond to m times of RaR. Finally, when m times of RaR still do not reach the maximum token limitation, we will further perform RaR on the complete results to continue scaling. The specific RaR process can be referenced in our updated Algorithm 1 and Equations (5) and (6), which detail the formulation during the RaR scaling process.\\n\\nTherefore, RaR does not need to control the generation steps of CoT. Sorry for the confusion, we have revised the paper according to your comments.\\n\\n\\n> The QA dataset used in this paper is TriviaQA, which only requires single-hop retrieval and reasoning to complete. Given the author's motivation to use retrieval to improve multi-step reasoning, using multi-hop reasoning datasets such as MuSiQue, 2WikiMHQA, etc., seems to be a more appropriate choice.\\n\\nThank you for your comment. Based on your feedback, we have added the 2WikiMultiHopQA and Musique benchmark. All methods use `gpt-4o-mini` and are tested under a maximum of 4K tokens.\\n\\n| Method | DIRECT | RAG | Active RAG | IRCoT | RaR(4K) | RaR(8K) |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| 2WikiMultiHopQA Answer F1 | 50.3 | 63.7 | 67.9 | 68.4 | 77.6 | 79.4 |\\n| Musique Answer F1 | 41.9 | 44.6 | - | 56.5 | 63.4 | - |\\n\\nDue to time constraints, we have currently only completed these baselines (RAG, Active RAG, IRCoT) and RaR (under 4K maximum tokens) and RaR (under 8K maximum tokens). Under the same token limitation, iterative RaR has shown better performance compared to other baselines, with a +21.8% relative improvement on Answer F1 score on 2WikiMultihopQA compared to RAG. When RaR is under more tokens (8k), the performance further improves (79.4 vs. 77.6 on Answer F1 score).\\n\\nWe will add this experiment to the main text of the paper once all the baselines are completed.\"}", "{\"comment\": \"> 1. 
In the paragraph from Line 220 to Line 224, the description and the formula on Line 224 seem to be incorrect or unclear. It is difficult to understand what the formula is intended to express.\\n\\nSorry for the confusion.\\n\\nThis formula is to illustrate how RaR iteratively updates reasoning steps. To make the expression clearer, we have updated section 3.2 of the method. The updated formula is:\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y_{i}^{\\\\text{thought}}, V_i^k, y_i^\\\\text{reflection}), i = 1$\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y^{\\\\text{RaR}}_{i-1}, y_i^{\\\\text{thought}}, V_i^k, y_i^\\\\text{reflection}),1 < i < J,$\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y_{i-1}^{\\\\text{RaR}}, y_{i}^{\\\\text{thought}}, y^{\\\\text{raw}}, V_i^k, y_i^\\\\text{reflection}), i = J. $\\n\\nIn this equation, $y^{RaR}_i$ represents the RaR response during the i-th iteration process, $y_i^{thought}$ represents the i-th step of reasoning, $V_i^k$ represents the document most relevant to the current i-th step, and $y_i^{reflection}$ represents the reflection of the LLM based on the content of the retrieval for the i-th step.\\n\\nWe have updated the Method 3.2 Section and Algorithm 1 part of the paper based on your feedback.\\n\\n> 2. What are the differences between the two sets of formulas on Line 219, 224, and Line 233~235?\\n\\nSorry for the confusion. Line 219 and 224 show the formulation of the query and RaR response, while in Line 233-235 we demonstrate how iterative RaR performs retrieval and reflection.\\n\\nBased on your feedback, we have updated these formulas in Section 3.2 of the paper. 
In the updated paper, the query for the i-th iteration of iterative RaR is shown in Equation (4):\\n\\n$q^i \\\\sim p_{\\\\text{LM}}(\\\\cdot \\\\mid x^*, \\\\\\\\{y_j^\\\\text{thought}\\\\\\\\}_{j=1}^{j<=i}), i=1,\\\\ldots,J.$\\n\\nAmong them, J represents the number of reasoning steps, $x^*$ represents the instruction, and $y_j^{thought}$ represents the content of the j-th reasoning step.\\n\\nThe reflected response of iterative RaR in the i-th iteration (Equation (5)) is:\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y_{i}^{\\\\text{thought}}, V_i^k, y_i^\\\\text{reflection}), i = 1$\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y^{\\\\text{RaR}}_{i-1}, y_i^{\\\\text{thought}}, V_i^k, y_i^\\\\text{reflection}),1 < i < J,$\\n\\n$y_{i}^{\\\\text{RaR}} \\\\sim p_\\\\text{LM}(\\\\cdot \\\\mid x, y_{i-1}^{\\\\text{RaR}}, y_{i}^{\\\\text{thought}}, y^{\\\\text{raw}}, V_i^k, y_i^\\\\text{reflection}), i = J. $\\n\\nIn this equation, $y^{RaR}_i$ represents the RaR response during the i-th iteration process, $y_i^{thought}$ represents the i-th step of reasoning, $V_i^k$ represents the document most relevant to the current i-th step, and $y_i^{reflection}$ represents the reflection of the LLM based on the content of the retrieval for the i-th step.\\nThanks for your comments.\\n\\n\\n> 3. The mathematical reasoning in the article uses simple arithmetic reasoning, with the GSM8K and GSM-Hard datasets. Unlike MATH, which involves some advanced mathematical knowledge, these two datasets consist of simple multi-step arithmetic operations and basic linear equations, without involving external knowledge. Secondly, the retrieval for mathematical reasoning uses Jupyter code corpus, which does not seem to be helpful for solving mathematical reasoning problems.\\n\\nThank you for your comments. Based on your feedback, we have added the experimental results of RaR on the MATH dataset. 
In the MATH benchmark, we uniformly used the Deepseek-math-7B model and used the PRM800k dataset as the retrieval documents library, with a maximum token limitation of 4K tokens.\", \"the_experimental_results_are_as_follows\": \"| Method | DIRECT | RAG | IRCoT | RaR |\\n| --- | --- | --- | --- | --- |\\n| Accuracy | 58.8 | 60.2 | 64.5 | 69.3 |\\n\\nUnder the same model and RAG settings, RaR demonstrated a +15.1% relative improvement. We will further increase more experiments to demonstrate the robustness and scalability of RaR.\"}", "{\"summary\": \"This paper proposes RaR (Retrieval-augmented Reflection), which introduces retrieval-augmented reflection in multi-step reasoning, combining relevant information from external retrieval to correct the intermediate reasoning steps, thereby improving the model's reasoning performance.\\n\\nSpecifically, the proposed method first uses Zeroshot CoT to generate a step-by-step reasoning trajectory, in which may contain erroneous parts.\\nSubsequently, the paper attempts to improve the reasoning by retrieving relevant information from an external knowledge base through a retrieval-augmented approach, and using this information to correct any potential errors in the reasoning.\\nChain-of-thought reasoning is step-by-step, and to achieve more fine-grained and precise corrections, this paper proposes to correct the reasoning steps one by one from the beginning to the end.\\nAt each step of reasoning verification, relevant external information needs to be retrieved. Assuming that the current step i needs to be corrected, the question and reasoning steps 1 through i are used as the retrieval query to retrieve relevant materials, and the retrieved content is used to correct the reasoning.\\nThe paper conducted experiments on tasks such as code generation, mathematical reasoning, and knowledge-intensive question answering.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper proposes a reasoning framework that combines retrieval-augmented and reflection, called RaR (Retrieval-augmented Reflection), which can correct potential errors in the reasoning process based on external knowledge.\\n2. The paper proposes a multi-round reflection method, where each round only retrieves and reflects on the current step, thereby more accurately identifying the location of errors. On another dimension, this approach can increase computational FLOPs during reasoning, allowing for inference-time scaling based on the inference scaling laws.\\n3. The paper conducted experiments on several datasets, including tasks such as code generation, mathematical reasoning, and question answering, demonstrating the effectiveness of the method.\", \"weaknesses\": \"1. The writing of method part in the paper (Line 216~Line 239) is somewhat vague. While I can understand the general idea of the proposed method, I cannot accurately grasp the specific details through the formulas and descriptions, and clarification from the authors is needed.\\n2. The author claims that by expanding n (the number of reasoning steps) to achieve Inference Scaling. However, this dimension seems not easy to expand. On one hand, the number of reasoning steps generated by Zeroshot CoT is not entirely controllable. On the other hand, for a specific problem, the number of reasoning steps is clearly limited. These two aspects make it less feasible and scalable to increase the number of reasoning steps in order to increase the number of verification rounds.\\n3. The QA dataset used in this paper is TriviaQA, which only requires single-hop retrieval and reasoning to complete. Given the author's motivation to use retrieval to improve multi-step reasoning, using multi-hop reasoning datasets such as MuSiQue, 2WikiMHQA, etc., seems to be a more appropriate choice.\", \"questions\": \"1. 
In the paragraph from Line 220 to Line 224, the description and the formula on Line 224 seem to be incorrect or unclear. It is difficult to understand what the formula is intended to express.\\n2. What are the differences between the two sets of formulas on Line 219, 224, and Line 233~235?\\n3. The mathematical reasoning in the article uses simple arithmetic reasoning, with the GSM8K and GSM-Hard datasets. Unlike MATH, which involves some advanced mathematical knowledge, these two datasets consist of simple multi-step arithmetic operations and basic linear equations, without involving external knowledge. Secondly, the retrieval for mathematical reasoning uses a Jupyter code corpus, which does not seem to be helpful for solving mathematical reasoning problems.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper uses LLMs to generate chains-of-thought (CoT) for coding, math, and planning tasks, then uses RAG to iteratively improve each individual step in the original CoT multiple times to improve performance at the cost of extra compute and time during inference. Their method outperforms many other baselines (including RAG, CoT, Self-Refine, and others) and also shows performance increases as models scale in parameters and more iterations of refinements are done. Finally, they show that refining a CoT with a single prompt in one generation is not as powerful as their method, where you fix each individual step of the CoT one by one.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Strong results across many commonly used baselines showing their method (RaR) outperforms all of them.\\n2. Timely and well-motivated, I think scaling during inference is probably the easiest way to get better outputs from closed-source LLMs right now.\", \"weaknesses\": \"1. Lack of an important baseline, RAG + CoT. 
There already are a ton of baselines here, but I do feel that RAG + CoT is a very close baseline to the method proposed and is a very common one that I think people would want to see before using RaR. I think this baseline would also highlight (similarly to CoT+RAG in Table 3) that even with all the documents/passages required to answer a question, the LMs still benefit from iterative refinement. This would be a big win for the paper and make it extremely clear to other researchers why RaR should be used.\\n\\n2. More thorough analysis of why the baselines are failing. Tables 3 and 4 are good ablations showing the need for iteratively refining each individual step without showing the full CoT, but I think the paper would also benefit from detailing where traditional methods are failing. For example, if we used CoT+RAG or RAG+CoT with all the same documents retrieved in RAR, would we get similar performances? I think more discussion on where RaR is outperforming the other baselines would help researchers understand why RaR is an effective method and where it can be improved for future work.\", \"questions\": [\"Have you tried matching the amount of compute for RAR with other baselines like Self-refine?\", \"Have you tried a baseline similar to RAR but without RAG? (Just asking the LLM to verify/fix steps with no additional context to establish how important the retrieved documents are)\", \"\\\"With lower inference-time computation (FLOPs), a small LM can surpass the performance of the LM with more than 10 times parameters.\\\" I wasn't sure what this meant in the abstract. 
Are you saying a small LM can outperform a larger one when there is a large compute budget and RAR is used?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces Retrieval-Augmented Reflection (RaR), a framework for improving reasoning in LLMs by iteratively revising reasoning steps using external retrieval. While the method demonstrates potential in enhancing code generation, mathematical reasoning, and task planning, the submission contains substantial weaknesses. The evaluation lacks strong baselines and comprehensive comparisons, raising concerns about fairness. Details about experimental settings and results are insufficient, with discrepancies in reported performances. Methodological clarity is also lacking, particularly in the description of key components like stepwise reflection and scaling with inference-time computation.\\n\\nStrengths include the focus on improving inference-time reasoning and promising empirical results. However, these are undermined by poor experimental rigor, weak baselines, and limited novelty.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers highlighted concerns about evaluation fairness, incomplete baselines, and unclear experimental settings. While the authors attempted to address these through additional experiments and clarifications, key issues remain unresolved. The lack of transparency and discrepancies in performance reporting led reviewers to question the validity of the results. Despite some improvements in clarity and methodology, the submission\\u2019s limitations justify a rejection, with encouragement for substantial revisions.\"}" ] }
ElDpb1BWE3
Compositional Generative Multiphysics and Multi-component Simulation
[ "Tao Zhang", "Zhenhai Liu", "Feipeng Qi", "Yongjun Jiao", "Tailin Wu" ]
Multiphysics simulation, which models the interactions between multiple physical processes, and multi-component simulation of complex structures are critical in fields like nuclear and aerospace engineering. Previous studies often rely on numerical solvers or machine learning-based surrogate models to solve or accelerate these simulations. However, multiphysics simulations typically require integrating multiple specialized solvers—each responsible for evolving a specific physical process—into a coupled program, which introduces significant development challenges. Furthermore, no universal algorithm exists for multi-component simulations, which adds to the complexity. Here we propose compositional Multiphysics and Multi-component Simulation with Diffusion models (MultiSimDiff) to overcome these challenges. During diffusion-based training, MultiSimDiff learns energy functions modeling the conditional probability of one physical process/component conditioned on other processes/components. In inference, MultiSimDiff generates coupled multiphysics solutions and multi-component structures by sampling from the joint probability distribution, achieved by composing the learned energy functions in a structured way. We test our method in three tasks. In the reaction-diffusion and nuclear thermal coupling problems, MultiSimDiff successfully predicts the coupling solution using decoupled data, while the surrogate model fails in the more complex second problem. For the thermal and mechanical analysis of the prismatic fuel element, MultiSimDiff trained for single component prediction accurately predicts a larger structure with 64 components, reducing the relative error by 40.3% compared to the surrogate model.
[ "multiphysics", "multi-component", "PDE simulation", "physical simulation", "generative" ]
Reject
https://openreview.net/pdf?id=ElDpb1BWE3
https://openreview.net/forum?id=ElDpb1BWE3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ysYyOiYUUV", "yckiQRA2yW", "xFaEzgdJYd", "wkyXOb0VE6", "twYpCwydxS", "tNk5fmdEX9", "qaq20CtzsW", "qHgsN6H7IN", "orzTHuIGmL", "ohS0bsNQpo", "ng94IMTCUE", "n27k794NPL", "miHQd7yp4J", "mBBBfgkEVM", "jqZbtAAx28", "ioV4GpFmQq", "i8NCQTFdLs", "fqKHwxMuLA", "fowy0Sgkv8", "cohGMm6rWZ", "bCg5lVYTOc", "aaM3q2xaMa", "ZUY2yIkQkL", "U1rCV5mAKA", "TnqArsdJjr", "RYDatuWsN4", "PWJlr35BYj", "Lz1uVWKdNr", "LXEKzaM4Ys", "LCVhfJkm94", "K30xC6YEEF", "JdqnkqvvmM", "H4JvwQI4Pp", "Gz65DLvfWB", "FwNuL16ILq", "Edvnr8sLtO", "C4FYoIsl1y", "BmIzimrHAj", "AomMsaLiYJ", "ARLu1andVP", "8FmMYSSEWf", "7pgVBJiUaG", "7EaueUDNyk", "72nkNxDgBb", "5qa8XOCgjT", "4NHjDuPg9h" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732614660696, 1732260244157, 1732614594828, 1732260852278, 1732260407553, 1732260627503, 1732260708743, 1732507029783, 1732614636042, 1732260133299, 1732506922411, 1732260547470, 1732259566985, 1733123749247, 1730706619160, 1732260675801, 1732606656894, 1734777432542, 1732259954152, 1732506967969, 1733082546375, 1732507144619, 1733104362817, 1732614611018, 1732260200852, 1730688702513, 
1732259993199, 1732260828683, 1733192209756, 1730501956954, 1732614647341, 1732614625898, 1733138136510, 1733191544049, 1732507062100, 1730668628352, 1730310531673, 1732259632892, 1732259689813, 1737523640578, 1733103392357, 1732507119118, 1729876400681, 1733099180785, 1733192249521, 1732261370508 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_xu7A" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_Fscm" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_9cxV" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_9cxV" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4447/Reviewer_Fscm" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Area_Chair_N5rq" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_HRRS" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_HRRS" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_1Pcu" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Reviewer_6eU9" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ], [ "ICLR.cc/2025/Conference/Submission4447/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"title\": \"Official Response to Reviewer 9cxV (5)\", \"comment\": \">Re7: As you mention, your method seems computationally intensive compared to a FNO for example, not just because of the denoising steps but also because of the loop over the physical fields. Do you have a rough estimation of how it compares in terms of execution time and in terms of FLOPs?\", \"answer\": \"Thank you for pointing out this issue. We are sorry that we didn't provide a clear definition before, but now we have updated it in the form of a footnote 1 in the manuscript. In this manuscript, \\\"outer inputs\\\" refers to the inputs of the physical system, physical fields are what we need to solve. 
For example, in experiment 1, they are the initial conditions; in experiment 2, they are the variations of boundary neutron flux density over time; and in experiment 3, they are the heat flux density.\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"title\": \"Official Response to Reviewer 6eU9 (2)\", \"comment\": \"For reference [1], this article employs the JFNK method to solve large systems of equations, which is a full-coupling strategy and a numerical technique for addressing multiphysics coupling problems. However, the full-coupling approach is not suitable for all scenarios, particularly when two physical processes require different numerical methods for their solutions. In addition, the cost of JFNK for solving complex problems is high.\\n\\nFor reference [2], as mentioned in the manuscript, this article establishes a surrogate model for the temperature field and then integrates it with other CFD solvers to accelerate multiphysics simulation. However, the issue discussed is only a unidirectional coupling problem, while bidirectional coupling may introduce additional challenges. In contrast, our algorithm is not limited in this regard, and Experiment 2 encompasses a variety of scenarios that multiphysics simulation might encounter.\\n\\nFor reference [3,4,5,6], these articles are actually focused on the quantification of uncertainty in multiple numerical programs, which is entirely different from our work. The motivation is that numerical programs may have inaccurate internal parameters in new scenarios. To address this issue, these articles calibrate the internal parameters of the program with new experimental data to ensure that numerical predictions match observed data.\\n\\nAs you mentioned, researcher Karen Willcox from UT-Austin is a highly influential scholar in her field. 
Her research focuses on model reduction, multifidelity methods, digital twins, and uncertainty quantification. The theme closest to this article is model reduction, which can also be understood as the use of surrogate models. In our manuscript, we use the surrogate model as a baseline for comparison and our method demonstrates superior performance. Researcher Youssef Marzouk from MIT is also a highly influential scholar in his field. His research focuses on uncertainty quantification, inverse problems, and Bayesian statistics. He has done extensive work using Bayesian methods in inverse problems and uncertainty analysis. In the field of multiphysics, he uses Bayesian methods to identify the most critical factors in coupled multi-physics systems. However, our research is actually concerned with **forward** simulation problems.\\n\\nWe hope the explanation above has answered all of the questions.\\n\\n[1] Knoll, Dana A., and David E. Keyes. \\\"Jacobian-free Newton\\u2013Krylov methods: a survey of approaches and applications.\\\" Journal of Computational Physics 193.2 (2004): 357-397.\\n\\n[2] El Haber, George, et al. \\\"Deep learning model to assist multiphysics conjugate problems.\\\" Physics of Fluids 34.1 (2022).\\n\\n[3] Kennedy, Marc C., and Anthony O'Hagan. \\\"Bayesian calibration of computer models.\\\" Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63.3 (2001): 425-464.\\n\\n[4] Friedman, Samuel, and Douglas Allaire. \\\"Quantifying Model Discrepancy in Coupled Multi-Physics Systems.\\\" International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Vol. 50077. American Society of Mechanical Engineers, 2016.\\n\\n[5] Willmann, Harald, et al. 
\\\"Bayesian calibration of coupled computational mechanics models under uncertainty based on interface deformation.\\\" Advanced Modeling and Simulation in Engineering Sciences 9.1 (2022): 24.\\n\\n[6] Subramanian, Abhinav, and Sankaran Mahadevan. \\\"Error estimation in coupled multi-physics models.\\\" Journal of Computational Physics 395 (2019): 19-37.\"}", "{\"title\": \"Official Response to Reviewer HRRS\", \"comment\": \"We thank the reviewer for the constructive review, and are glad that the reviewer recognizes the novelty of our method. In the following, we address the points the reviewer raised and answer the questions.\\n\\n\\n>Re1: The algorithm lacks clarity, and the model structure is not sufficiently detailed.\", \"answer\": \"Both Experiment 1 and Experiment 2 deal with time-dependent systems. In these experiments, we treat time as an additional dimension, and **generate the state trajectory across all time simultaneously**. Consequently, for Experiment 1, which is a one-dimensional spatial-temporal problem, we employ a two-dimensional network; for Experiment 2, which is a two-dimensional spatial-temporal problem, we utilize a three-dimensional network.\"}", "{\"title\": \"Official Response to Reviewer 1Pcu (1)\", \"comment\": \"We thank the reviewer for the constructive review, and are glad that the reviewer recognizes the novelty and generality of our method and the practicality of the dataset. In the following, we address the points the reviewer raised and answer the questions.\\n\\n\\n>Re1: The multi-level \\u201cfor loops\\u201d can cause a significant computational bottleneck, limiting the practical applicability of the method. While the author briefly acknowledged this limitation, they did not provide any metrics on the method\\u2019s computational efficiency, particularly in comparison to surrogates, which are a key consideration for physical simulations and their surrogates.\", \"answer\": \"Thank you for your comment.
In fact, before starting on this work, we had considered using existing compositional generation methods. However, we found that these methods are not applicable to our problem. The most relevant studies to our issue are those by Du et al. [1] in the field of image generation and Wu et al. [2] who further extended the concept to inverse design problems in science. Their focus is on how **a single object is influenced by multiple factors**, such as generating images that meet various requirements in image generation or predicting the fluid fields and enhancing the lift-to-drag ratio under the influence of two wings in inverse design. However, our problem involves **multiple objects**, such as multiple physical processes and components, requiring the capture of interactions between these processes or components. Therefore, their methods are not suitable for our scenario.\\n\\n[1]Du, Yilun, et al. \\\"Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc.\\\" ICML, 2023.\\n\\n[2]Wu, Tailin, et al. \\\"Compositional Generative Inverse Design.\\\" ICLR, 2024.\"}", "{\"title\": \"Official Response to Reviewer 1Pcu (3)\", \"comment\": \">Re4: Although interesting, it remains unclear how models trained on small structures can effectively extrapolate to larger structures. For instance, a model trained on a single pendulum cannot easily predict the behavior of a double pendulum, as the interactions within coupled systems add complexity beyond a simple combination. Could you clarify the difference between this scenario and the benchmarks used in paper, or at least explain if this method can be applied to this scenario?\", \"answer\": \"Thanks for your insightful feedback. That is a very good question. 
We have added a description of the application scenarios in the appendix.\\n\\nFirstly, from a theoretical standpoint, in the derivation of Section 3.2, we made an assumption: we consider the solution on a multi-component structure to be an undirected graph that satisfies the local Markov property, meaning that any two non-adjacent variables are conditionally independent given all other variables. Using this property, we derived Equation 14. We believe this assumption is applicable to most problems because physical fields are **continuous** in space, and the information exchange between any two points must be transmitted through the points in between. Essentially, our method learns **local conditional probability distributions**. At inference time, these local conditional probabilistic models can compose to generate **globally unseen distributions**, although locally they are **in distribution**. This is the core reason why our method can extrapolate to larger structures at inference.\\n\\n\\nOn the other hand, we find that there is a class of problems to which current methods cannot be directly applied, namely partial differential equations that require determining eigenvalues. The eigenvalues of these problems vary with different systems, and the relationships we learn on small structures may not be applicable to large structures. Solutions to these problems may be similar to numerical algorithms, requiring the addition of an eigenvalue search process, which will be undertaken in future work.\\n\\nFrom a practical implementation perspective, for a complex structure, it is necessary to clearly determine its **basic components** and the relationships between these components and their surrounding components, so that we can understand how the components are affected by their surrounding components.
In addition, training data must encompass all possible scenarios that each component in a large structure might encounter, such as all possible boundary conditions and the relationships with surrounding components, so that, at inference, they are **locally in distribution**.\\n\\nLastly, regarding the double pendulum problem you mentioned, we indeed cannot infer the behavior of a double pendulum from the data of a single pendulum. In a double pendulum system, the balls exist in two states: one end in contact with the wall and the other with a ball; or one end with a ball and the other empty. In contrast, a single pendulum system only has one end in contact with the wall and the other empty. Therefore, it is not possible to obtain the solution for a double pendulum by combining data from a single pendulum. For the more general n-pendulum system, the balls exist in three states: one end in contact with the wall and the other with a ball; one end with a ball and the other empty; both ends with balls. These states are all present in a triple pendulum system, so perhaps we can simulate an n-pendulum system with data from a triple pendulum system. However, in practice, the chaotic nature of the system may amplify the errors between our method and the ground truth, leading to failure. In addition, solving a triple pendulum system is already quite complex, making verification very difficult.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer HRRS,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating.
Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"title\": \"Official Response to Reviewer 9cxV (3)\", \"comment\": \">Re3: \\\"there do not exist utilized machine learning methods for multi-component simulation\\\". I don't really understand the novelty since, technically, UNets [Ronneberger et al. 2015], FNO [Li et al. 2022], Transformer [Mccabe et al. 2023], diffusion [Kohl et al. 2024] models are already used on multi-component simulations.\", \"answer\": \"In our response to the above comment Re1, we have provided extensive explanations of the multi-component issue. In short, multi-component simulation refers to the simulation of complex structures composed of multiple similar components. Multi-component simulation typically requires that, at inference time, it can deal with **larger structures** than previously seen. As was discussed in the response to the above Re1, previous benchmarks do not possess multi-component datasets, and the methods you mentioned were not specifically designed for multi-component simulations, yet some of them may be suitable baselines. Here, we provide a detailed discussion of the mentioned methods' applicability to multi-component simulations:\\n- For the U-Net [1], although it has been applied to many learned physical simulation works before, it requires the data to form a regular grid. In contrast, multi-component problems typically have non-rectangular grids. Thus, U-Net is in general not applicable to multi-component problems.\\n- For the FNO [2], due to its FFT and IFFT being global operations in the full domain, it requires that training and inference have the same domain structure.
In contrast, multi-component simulation typically requires simulating larger structures than seen in training, which FNO cannot handle.\\n- For the Transformer [3], it has the potential to address multi-component simulations, since it can handle larger input lengths during inference. The standard transformer [3,7] without graph structure embedding cannot handle multi-component simulation well, since it does not consider the complex graphical interactions between the components. On the other hand, graph transformers, such as SAN [5], which utilizes eigenvectors as position embeddings, have the potential to address multi-component simulations, which we have added as a baseline in the revised manuscript.\\n- For the diffusion [4], the three tasks addressed in this article are: Incompressible Wake Flow, Transonic Cylinder Flow, and Isotropic Turbulence. The first two tasks simulate the flow of fluid around a cylinder in a rectangular domain, while the third task involves turbulence within a cuboid, with the computational domain being a 2D slice from a 3D dataset. These tasks are not related to multi-component problems. However, if the simulation involves a complex phenomenon such as fluid flowing around a **bundle** of rods, which is a multi-component problem, our method might be applicable for solving it.\\n\\nWe have also modified the phrasing of this sentence in the Related Work to make it more accurate. We have also added the baselines of Graph Transformer SAN [5] and Graph Neural Network GIN [6] in Experiment 3 to compare the performance. Due to the uniformity of graph structures in all training data and the fact that SAN learns a global relationship, SAN fails to predict larger structures. We see that our method, which is specifically designed for multi-component simulation, significantly outperforms the baselines.
Part of Table 3 is also provided as follows (the unit is $1\\\\times 10^{-2}$):\\n| Method | single T | single \\u03b5 | 16-component T | 16-component \\u03b5 | 64-component T | 64-component \\u03b5 |\\n| ------------------ | -------- | -------- | -------------- | -------------- | -------------- | -------------- |\\n| GIN | - | - | 1.96 | 3.18 | 4.63 | 7.02 |\\n| SAN | - | - | 0.114 | 16.5 | 100 | 11800 |\\n| MultiSimDiff (ours) | 0.107 | 0.303 | 0.213 | 1.03 | 0.759 | 1.94 |\\n\\n\\n[1]Ronneberger, et al. \\\"U-net: Convolutional networks for biomedical image segmentation.\\\" 2015.\\n\\n[2]Wen, Gege, Z Li, et al. \\\"U-FNO\\u2014An enhanced Fourier neural operator-based deep-learning model for multiphase flow.\\\" 2022.\\n\\n[3]Alwahas, Areej, Kasper Johansen, and Matthew McCabe. \\\"Crop Type Mapping Using Self-supervised Transformer with Energy-based Graph Optimization in Data-Poor Regions.\\\" 2023.\\n\\n[4]Kohl, Georgi, et al. \\\"Benchmarking autoregressive conditional diffusion models for turbulent flow simulation.\\\" 2024.\\n\\n[5] Kreuzer, Devin, et al. \\\"Rethinking graph transformers with spectral attention.\\\" Advances in Neural Information Processing Systems 34 (2021): 21618-21629.\\n\\n[6] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.\\n\\n[7] Vaswani, A. \\\"Attention is all you need.\\\" Advances in Neural Information Processing Systems (2017).\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"A gentle reminder: please respond to our rebuttal\\nDear Reviewer xu7A,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? 
If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"title\": \"Official Response to Reviewer Fscm\", \"comment\": \"We thank the reviewer for the constructive review, and are glad that the reviewer recognizes the novelty of our method and the credibility of the experimental results. In the following, we address the points the reviewer raised and answer the questions.\\n\\n\\n>Re1: In the comparative experiment, is the surrogate model too simple? Or can you compare your model with a more complex model? Or explain the current popularity of the surrogate model.\", \"answer\": \"Thank you for pointing out this issue. We have adjusted the layout to reduce the presence of blank lines.\"}", "{\"title\": \"Official Response to Reviewer xu7A (1)\", \"comment\": \"We thank the reviewer for the constructive review, and are glad that the reviewer recognizes the novelty and effectiveness of our method. In the following, we address the points the reviewer raised and answer the questions.\\n\\n\\n>Re1: Only accuracy is compared in the examples. Efficiency comparison is also important. Specifically, a fair comparison with standard numerical methods (such as FEM or FVM) is important.\", \"answer\": \"We have employed DDIM for accelerated sampling, and the results are shown in appendix H. We also provide part of the tables below. We see that compared to the original DDPM, DDIM sampling can achieve a **10-fold** and **5-fold** acceleration for multi-physics and multi-component problems, respectively, while maintaining accuracy.
For DDPM, the diffusion step of all experiments is 250.\\n\\n\\n| Exp1 | u (decoupled) | u (coupled) | v (decoupled) | v (coupled) | runtime |\\n| ------------- | ------------- | ----------- | ------------- | ----------- | ----------- |\\n| Original DDPM | 0.0119 | 0.0141 | 0.0046 | 0.0174 | 11.5 s |\\n| $S=25$ | 0.0119 | 0.0151 | 0.0081 | 0.0192 | 1.15 s |\\n\\n| Exp2 | Neutron (decoupled) | Neutron (coupled) | Solid (decoupled) | Solid (coupled) | Fluid (decoupled) | Fluid (coupled) | runtime |\\n| ------------- | ------------------- | ----------------- | ----------------- | --------------- | ----------------- | --------------- | ----------- |\\n| Original DDPM | 0.487 | 1.97 | 0.108 | 2.87 | 0.303 | 3.91 | 36.3 s |\\n| $S=25$ | 0.552 | 2.03 | 0.142 | 3.64 | 0.343 | 4.08 | 3.63 s |\\n\\n| Exp3 | Single T | Single \\u03b5 | 16-component T | 16-component \\u03b5 | 64-component T | 64-component \\u03b5 | runtime (64-component) |\\n| ------------- | -------- | -------- | -------------- | -------------- | -------------- | -------------- | -------------- |\\n| Original DDPM | 0.107 | 0.303 | 0.213 | 1.03 | 0.759 | 1.94 | 19.2 s |\\n| $S=50$ | 0.158 | 0.337 | 0.669 | 1.87 | 0.865 | 2.31 | 3.84 s |\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your constructive comments and for raising our manuscript's score. Your insights have been invaluable in enhancing the quality of our work.\\n\\nRegarding the multiphysics aspect, your understanding is correct. Indeed, in mathematics, multiphysics problems are represented by a set of coupled partial differential equations (PDEs) that describe multiple physical processes. Coupled PDEs that describe a single physical process are generally solved together.
However, in the engineering field, coupled PDEs that describe multiple physical processes are typically solved separately through data transfer methods due to factors such as the difference in spatial and temporal resolution (as mentioned in lines 76-78 of the manuscript). For instance, in reactor engineering, the nuclear-thermal coupling involves solving both the neutron physics field and the fluid field. The fluid field is addressed using the Navier-Stokes (NS) equations, which include multiple PDEs to describe the fluid's flow and heat transfer phenomena, and are typically solved using the finite volume method on a fine mesh. The neutron physics field is addressed using a set of diffusion equations, which are generally solved using finite difference methods or Monte Carlo techniques on a very coarse mesh. Due to the different solution methods and spatial-temporal resolutions, these two fields can only be solved separately. Previous studies on coupled PDEs focused on a single physical process, whereas Experiment II in this paper addresses three physical processes.\\n\\nRegarding the multi-component aspect, the term \\\"component\\\" has been defined in the manuscript at line 39 as follows: \\\"Component is defined as: a repeatable basic unit that makes up a complete structure.\\\" We also give a further explanation at the beginning of Section 3.2.\\n\\nWe will incorporate the above discussion in the manuscript.\"}", "{\"summary\": \"This paper presents a data-driven approach for multiphysics and multi-component simulations using compositional generative diffusion models. The proposed method tackles the complexities of coupled simulations by learning energy functions that model conditional probabilities between physical fields and components. The proposed method is validated on 3 tasks, including reaction-diffusion, nuclear thermal coupling, and thermal-mechanical simulations of prismatic fuel elements.
The results show improved accuracy and reduced error over traditional surrogate models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The use of compositional generative models for multiphysics and multi-component simulations is a fresh approach in this field, addressing the challenges of coupling multiple physical domains.\\n2. This paper demonstrates the capability of MultiSimDiff to predict coupled interactions from models trained on decoupled data, simplifying data requirements and model development.\", \"weaknesses\": \"1. Only accuracy is compared in the examples. Efficiency comparison is also important. Specifically, a fair comparison with standard numerical methods (such as FEM or FVM) is important.\\n2. The iterative nature of the diffusion process in MultiSimDiff can be computationally intensive, particularly for multiphysics simulations due to the high complexity. \\n3. Since the model is trained on decoupled data, the approach's success may depend on the quality of this initial data. The paper could benefit from further discussion on the robustness of MultiSimDiff when the decoupled data does not closely resemble the coupled dynamics.\", \"questions\": \"1. There is only one example for demonstrating training one small structure simulation data and predicting larger structures. Graph Neural Network also has similar abilities. How's the performance comparison?\\n2. How does MultiSimDiff handle cases where the decoupled training data does not closely match the dynamics of coupled data? Would the model performance degrade significantly?\\n3. 
Could you clarify if there are specific scenarios where traditional numerical solvers might still outperform MultiSimDiff in terms of accuracy or computational cost?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer 1Pcu (2)\", \"comment\": \">Re3: In some experiments, the combination of \\u201csurrogate+x\\u201d outperformed the proposed methods.\", \"answer\": \"Firstly, we would like to clarify that there is a significant difference between the training data and the test data in our experiments, and our primary focus is on the model's performance on the test data. When dealing with multiphysics problems, we pay special attention to the model's predictive capabilities on coupled data, while in multi-component problems, we focus on the accuracy of the model's predictions for larger structures than in training. In the original submission, our algorithm performed optimally in experiment 2 and experiment 3. However, in experiment 1, when the network architecture is FNO, the surrogate model showed superior performance in predicting the physical quantity \\\"v\\\". We must apologize for an error in the previous code: during the training of the FNO-2D diffusion model, the diffusion time step was not correctly inputted, leading to significant prediction errors in our method. Table 1 has now been revised in the latest manuscript, and **our algorithm demonstrates optimal performance in all experiments**.
Part of Table 1 is also provided as follows:\\n\\n| Method | u (decoupled) | u (coupled) | v (decoupled) | v (coupled) |\\n| ----------------------------- | ------------- | ----------- | ------------- | ----------- |\\n| surrogate + FNO | 0.0669 | 0.0600 | 0.0080 | 0.0320 |\\n| **MultiSimDiff (ours) + FNO** | 0.0270 | **0.0290** | 0.0102 | **0.0264** |\"}", "{\"comment\": \"Dear all,\\n\\nThe deadline for the authors-reviewers phase is approaching (December 2).\\n\\n@For reviewers, please read, acknowledge and possibly further discuss the authors' responses to your comments. While decisions do not need to be made at this stage, please make sure to reevaluate your score in light of the authors' responses and of the discussion.\\n\\n- You can increase your score if you feel that the authors have addressed your concerns and the paper is now stronger.\\n- You can decrease your score if you have new concerns that have not been addressed by the authors.\\n- You can keep your score if you feel that the authors have not addressed your concerns or that remaining concerns are critical.\\n\\nImportantly, you are not expected to update your score. Nevertheless, to reach fair and informed decisions, you should make sure that your score reflects the quality of the paper as you see it now. Your review (either positive or negative) should be based on factual arguments rather than opinions. In particular, if the authors have successfully answered most of your initial concerns, your score should reflect this, as it otherwise means that your initial score was not entirely grounded by the arguments you provided in your review. Ponder whether the paper makes valuable scientific contributions from which the ICLR community could benefit, over subjective preferences or unreasonable expectations.\\n\\n@For authors, please respond to remaining concerns and questions raised by the reviewers. Make sure to provide short and clear answers. 
If needed, you can also update the PDF of the paper to reflect changes in the text. Please note however that reviewers are not expected to re-review the paper, so your response should ideally be self-contained.\\n\\nThe AC.\"}", "{\"metareview\": \"The reviewers are divided (6-5-5-8-5-5) about the paper, leaning more towards rejection than acceptance. The paper presents a data-driven approach for multiphysics and multi-component simulations using compositional generative diffusion models. Concerns have been raised regarding the presentation of the method. The author-reviewer discussion has been constructive and has led to a number of clarifications, but some of the reviewers' concerns remain. Regarding the experimental validation, concerns were raised regarding the computational efficiency of MultiSimDiff and its robustness to coupled interactions. The authors have provided new results using DDIM for accelerated sampling, which show a significant speedup, and have provided further results exploring the case of coupled data. Finally, the novelty of the approach has been questioned, but the authors have provided clarifications. Unfortunately, some of the reviewers have not replied back to the authors' clarifications. Given the lack of a strong signal towards acceptance, the remaining concerns, and the mixed reviews, I recommend rejection. I encourage the authors to address the reviewers' comments and to resubmit to a future conference.\", \"additional_comments_on_reviewer_discussion\": \"The author-reviewer discussion has been constructive and has led to a number of clarifications, but some of the reviewers' concerns remain.\"}", "{\"title\": \"Official Response to Reviewer 9cxV (1)\", \"comment\": \"We thank the reviewer for the constructive review, and are glad that the reviewer recognizes the novelty of our method. 
In the following, we address the points the reviewer raised and answer the questions.\\n\\n>Re1: claim (2): There are several datasets in the literature that can be described as simulations of multiphysics, multi-component, such as PDEBench [Takamoto et al., 2022]. The authors should better precise why the new datasets they propose is a contribution to the literature.\", \"answer\": \"Thanks for your comments. Multiphysics simulation refers to the simultaneous consideration and modeling of multiple **physical processes**, such as heat conduction, fluid flow, and structural mechanics, each of which may involve one or multiple physical fields. Therefore, multiphysics does not simply mean multiple physical fields, i.e., there are many scenarios that involve several physical fields, but only one physical process. In \\\"PDEBench\\\" [1], although a series of partial differential equation (PDE) cases are provided, including convection, diffusion reaction, and Navier-Stokes equations, there are no cases of multiphysics simulation, such as coupling the reaction-diffusion equation with the Navier-Stokes equations, which is very common in combustion dynamics. In this manuscript, Experiment 1 solves the reaction-diffusion equation, which does exist in the literature [1]. But strictly speaking, this does not constitute a multiphysics problem because the fields all belong to the concentration field; it is merely for validating the proposed algorithm. We explain this in the updated manuscript. Experiment 2 is a simplified version of the nuclear thermal coupling problem in the nuclear engineering field, encompassing three physical processes, with the fluid field solved using the finite volume method and the other two fields solved using the finite element method.
This problem covers most coupling types: strong and weak coupling between physical processes, regional and interface coupling, and unidirectional and bidirectional coupling, making it a very representative multiphysics simulation case. To our knowledge, such cases **have not been found** in reference [1] or elsewhere in the literature.\\n\\nMulti-component simulation refers to the simulation of complex structures composed of multiple similar components, which is very common in the civil, aerospace, and nuclear engineering fields. Multi-component simulation typically requires that, at inference time, it can deal with **larger structures** than previously seen. For example, the reactor core typically consists of hundreds or thousands of fuel elements arranged in a square or hexagonal pattern. In fuel cells, the repeated array of ribs directly affects the cell's performance. The case domains in PDEBench [1] are all **regular** rectangular domains with relatively simple geometric structures and **no interaction** between multiple structural components. On the other hand, Experiment 3 in this manuscript solves the thermal and mechanical properties of multiple prismatic fuel elements, with mechanical and thermal interactions between adjacent fuel elements. Our predicted structure is more complex than in training, involving a large structure of 64 elements, each containing 804 grid points. Therefore, we consider this problem to be a typical case of multi-component problems with a certain level of complexity.\\n\\nWe have added appendix K and provide Table 21 to outline the principal characteristics of these datasets and compare them with Reference [1]. The number of physics processes and the number of components are both **1** in Reference [1]. The table is also provided in the next response.\\n\\n[1] Takamoto, Makoto, et al.
\\\"PDEBench: An extensive benchmark for scientific machine learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 1596-1611.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer 9cxV,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"comment\": \"Your answer has cleared my doubts to a certain extent. I am very grateful and I have improved my score.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer 6eU9,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"comment\": \"I thank the authors for their detailed response to my points, which helped clarify important aspects of the paper. I will raise my score to reflect this.\\n\\nHowever, I once again encourage the authors to formalize what is meant by \\\"multi-component\\\" and \\\"multi-physics\\\".\\nIn your response, you write, \\\"Multiphysics simulation refers to the simultaneous consideration and modeling of multiple physical processes\\\", which is clear. But, how does this differ from coupled PDEs? 
While the reader might infer the distinction, it should be explicitly stated to better motivate the choice of baselines.\\nMore importantly, you define \\\"Multi-component simulation\\\" as \\\"the simulation of complex structures composed of multiple similar components\\\". This definition is inadequate because it does not clearly explain what a \\\"component\\\" entails.\\n\\nClarifying these terms is essential for understanding the task presented in the paper and for justifying the choice of baselines and the paper's contributions.\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"title\": \"Official Response to Reviewer 9cxV (4)\", \"comment\": \">Re4: Paragraph 3.2 is confusing. In particular, there can be confusion between \\\"multiple fields\\\", \\\"multiphysics\\\" and \\\"multi-components\\\". After having read the paper carefully, it seems to me that the main difference between \\\"multiphysics\\\" and \\\"multi-components\\\" is in the way they are treated computationally. The multiple components being treated as interchangeable, in the sense that the same model is used for the conditioning of one on the others, while multiple \\\"physics\\\" do not assume such interchangeability.\", \"answer\": \"Thanks for the question. Yes, we actually learn the gradient of the energy function through the diffusion model. We have updated the manuscript and now it clearly states that we learn the gradient of the energy.\"}", "{\"summary\": \"The paper proposes a diffusion-based model for simulating multiphysics, multi-component systems. The model learns the conditional distributions of each field/component given the others. During inference, the model iteratively denoises each field/component in a manner similar to diffusion models.
The model is applied to reaction-diffusion, nuclear thermal, and prismatic fuel element datasets.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and clear, presenting a novel combination of ideas by proposing to use diffusion models for simulating multiphysics, multi-component systems.\", \"weaknesses\": [\"the main claims of the paper, (2) and (3), are questionable. See the two points below\", \"claim (2): There are several datasets in the literature that can be described as simulations of multiphysics, multi-component, such as PDEBench [Takamoto et al., 2022]. The authors should better specify why the new datasets they propose are a contribution to the literature.\", \"claim (3): More importantly, based on the results, it is difficult to determine whether the proposed method is advantageous for other multiphysics, multi-component systems, given the fact that it may not be competitive in terms of computational cost (see below question)\", \"Minor remarks.\", \"l102. \\\"a process we have mathematically proven\\\", I don't think a \\\"process\\\" qualifies as being able to be \\\"mathematically proven\\\". You should specify what is \\\"mathematically proven\\\"\", \"l107. \\\"This reverse diffusion process is also mathematically validated\\\", same remark\", \"l137. \\\"there do not exist utilized machine learning methods for multi-component simulation\\\". I don't really understand the novelty since, technically, UNets [Ronneberger et al. 2015], FNO [Li et al. 2022], Transformer [Mccabe et al. 2023], diffusion [Kohl et al. 2024] models are already used on multi-component simulations.\", \"l251. Paragraph 3.2 is confusing. In particular, there can be confusion between \"multiple fields\", \"multiphysics\" and \"multi-components\".
After having read the paper carefully, it seems to me that the main difference between \\\"multiphysics\\\" and \\\"multi-components\\\" is in the way they are treated computationally. The multiple components being treated as interchangeable, in the sense that the same model is used for the conditioning of one on the others, while multiple \\\"physics\\\" do not assume such interchangeability.\", \"l253. The paper would benefit greatly by providing at this stage a clear example of what a \\\"component\\\" is.\"], \"questions\": [\"You mention several times that the model \\\"learns the energy\\\", as well as the conditional energies, but do you actually learn the energy function E? Or, do you learn the gradient of the energy function (that is, the scores and conditional scores), thanks to the denoiser?\", \"As you mention, your method seems computationally intensive compared to an FNO for example, not just because of the denoising steps but also because of the loop over the physical fields.
Do you have a rough estimation of how it compares in terms of execution time and in terms of FLOPs?\", \"In algorithms 1 and 2, could you clarify what the \\\"outer inputs\\\" are in comparison to the \\\"physical fields\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer 9cxV (2)\", \"comment\": \"| PDE | Nd | Time | Computational domain | Number of physics processes | Number of components |\\n| --- | --- | --- | --- | --- | --- |\\n| advection | 1 | yes | Line | 1 | 1 |\\n| Burgers' | 1 | yes | Line | 1 | 1 |\\n| reaction-diffusion | 1 | yes | Line | 1 | 1 |\\n| reaction-diffusion | 2 | yes | Rectangle | 1 | 1 |\\n| diffusion-sorption | 1 | yes | Line | 1 | 1 |\\n| compressible Navier-Stokes | 1 | yes | Line | 1 | 1 |\\n| compressible Navier-Stokes | 2 | yes | Rectangle | 1 | 1 |\\n| compressible Navier-Stokes | 3 | yes | Cube | 1 | 1 |\\n| incompressible Navier-Stokes | 2 | yes | Rectangle | 1 | 1 |\\n| Darcy flow | 2 | no | Rectangle | 1 | 1 |\\n| shallow-water | 2 | yes | Rectangle | 1 | 1 |\\n| **reaction-diffusion (Exp1)** | 1 | yes | Line | 1 | 1 |\\n| **heat conduction + neutron diffusion + incompressible Navier-Stokes (Exp2)** | 2 | yes | 3 Rectangles | 3 | 1 |\\n| **heat conduction + mechanics (Exp3)** | 2 | no | Irregular domain | 2 | 16 / 64 |\\n\\nThus, we believe that our datasets in Exp2 and Exp3 constitute a contribution to the literature, as they offer multiphysics and multi-component simulations not present in previous benchmarks.\\n\\n>Re2: \\\"a process we have mathematically proven\\\", I don't think a \\\"process\\\" qualifies as being able to be \\\"mathematically proven\\\".
You should specify what is \\\"mathematically proven\\\";\\n\\\"This reverse diffusion process is also mathematically validated\\\", same remark\", \"answer\": \"Thank you for pointing out the inaccuracies in our previous statement. Indeed, a process should not be described as \\\"proven\\\". What we want to express here is that we have mathematically derived the principles explaining why our algorithm can obtain coupled solutions and large structure solutions. This has been corrected in the new manuscript.\"}", "{\"title\": \"Official Response to Reviewer 6eU9 (1)\", \"comment\": \"We appreciate you pointing out the research field of Bayesian calibration, which could be beneficial for our future research. We have read through all the articles you have recommended, but upon thorough analysis, we find that these Bayesian-based methods primarily focus on analyzing **well-established** multi-physics simulation systems, including uncertainty analysis, error calibration, and decoupling approximations of coupled systems.\\n\\nHowever, the motivation behind our method when applied to multiphysics simulation is that constructing coupled programs for complex problems can be exceedingly **intricate** and computationally inefficient. Our aim is to directly predict coupled solutions through decoupled programs, thereby eliminating the need for coupled program construction. For multi-component simulation, directly simulating the overall structure requires high computational cost and may encounter difficulties in convergence due to the increase in degrees of freedom. Our aim is to predict the large structure through small structure data. In the fields of engineering and scientific research, virtually all physicochemical processes involve the combined effects of multiple physical processes. Multi-component structures are common in large complex systems such as aircraft, buildings, and nuclear power plants, as well as in systems with highly intricate internal structures like chips and batteries.
Rapid and accurate solutions to multiphysics and multi-component problems can lead to a better understanding of the physical behavior of systems, thereby enhancing their economic and reliability performance.\\n\\nThus, we believe our method is fundamentally **different** from the work described in these articles. We use decoupled training data to predict coupled solutions, and small structure data to predict large structures. This method shows great significance in science and engineering, representing a novel contribution.\\n\\nThe detailed analysis of the articles you mentioned is in the next response.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer 1Pcu,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"summary\": \"Article summary: This article is about the application of diffusion models to engineering modeling (mainly thermal and material science). Because engineering problems often involve multiple physical processes (that is, multiple complex processes are involved when modeling), this article establishes a multi-process model to deal with different physical problems. (Mostly reflected in reaction-diffusion and nuclear thermal coupling problems)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Advantages:\\n\\n1. The starting point of this article is very novel, hoping to use limited data to deal with more physical problems at the same time.
Especially for the coupled mechanical-structure and thermal problem, the results in the article show very good performance. (Especially page 18)\\n\\n2. The adaptability of the ML model to the actual parameters of the engineering problem makes me feel that the design of the entire model is justified, and the motivation is clear and credible. (That is, the content in the appendix, Tables 10-13)\\n\\n3. Strict model comparison, such as using consistent hyperparameters and settings, and having different parameter designs in different engineering problems (such as porous/multi-part materials).\\n\\n4. The superiority of the model: from the perspective of energy (probability density form), some of the results obtained are indeed very good.\", \"weaknesses\": \"Disadvantages:\\n\\n1. In the comparative experiment, is the surrogate model too simple? Or can you compare your model with a more complex model? Or explain the current popularity of the surrogate model.\\n\\n2. I would like to know whether the model you designed has other innovations in structure and implementation compared with the diffusion model, in addition to the differences in parameter settings and application issues. I feel that the model is not deep enough, judging from the algorithms shown in the first six pages. (I will consider this again)\\n\\n3. I also found that your experiments often combine your model with other NNs (FNO, etc.). Why don't you use your model to implement it independently? Because I am worried that NNs such as FNO will additionally correct the errors of your model (if they occur).\", \"questions\": \"There are some unnecessary blank lines in the article that can be corrected. I will be happy to improve my score in subsequent discussions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.
This is original work and there are no ethical issues\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for addressing the comments and suggestions raised during the review process. I appreciate your detailed responses and the revisions made to improve the manuscript. Upon careful evaluation of the updated submission, I have decided to retain the original marks assigned during the review. Thank you for your effort and engagement throughout the review process.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal Dear Reviewer xu7A\", \"comment\": \"Thank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly. As the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer Fscm,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses?
If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"summary\": \"The author proposed compositional Multiphysics and Multi-component Simulation with Diffusion models (MultiSimDiff) to overcome the difficulty of solving complex systems\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is strong in originality and presents a novel approach to multiphysics simulation\", \"weaknesses\": \"The algorithm lacks clarity, and the model structure is not sufficiently detailed.\\n\\nThe iterative nature of MultiSimDiff, especially in multiphysics simulations, requires multiple diffusion steps for each field, which may lead to slow inference times. This constraint limits its practicality for scenarios requiring rapid predictions. While the authors recognize this issue and propose exploring faster sampling methods in future work, an initial investigation into such techniques within this paper could enhance its contribution.\", \"questions\": \"1. For Algorithm 1, it is identical to Algorithm 2 in the DDPM paper [1]. Also, the way $z_i$ is identified is confusing when applying the third for loop.\\n2. How is time involved in the algorithm, given that you are handling a time-dependent system?\\n\\n[1] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6840\\u20136851. Curran Associates, Inc., 2020.
URL https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents MultiSimDiff, a novel generative approach for multiphysics and multi-component simulations, using diffusion models to overcome the limitations of surrogate models. The MultiSimDiff framework can be seamlessly integrated with existing backbone architectures, modeling the conditional probability of each physical field or component within a system. By training on decoupled data, it can generate joint solutions through reverse diffusion. This method demonstrates high accuracy on benchmarks including reaction-diffusion, nuclear thermal coupling, and prismatic fuel elements.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The application of novel machine learning techniques (the use of diffusion models in scientific domains is relatively new), particularly for multiphysics and multi-component simulations.\", \"Trained on small, decoupled datasets, this method can provide solutions for extended, unseen data composed of smaller components, showing great potential for applications across various scientific and engineering fields.\", \"The benchmarks are somewhat new and practical, beyond traditional toy PDE benchmarks.\"], \"weaknesses\": [\"The multi-level \\u201cfor loops\\u201d can cause a significant computational bottleneck, limiting the practical applicability of the method.
While the authors briefly acknowledged this limitation, they did not provide any metrics on the method\\u2019s computational efficiency, particularly in comparison to surrogates, which is a key consideration for physical simulations and their surrogates.\", \"Although the authors noted the application of various compositional generative models in scientific domains in their related works, no compositional baselines were compared in the experiments.\", \"In some experiments, the combination of \\u201csurrogate+x\\u201d outperformed the proposed methods.\", \"Although interesting, it remains unclear how models trained on small structures can effectively extrapolate to larger structures. For instance, a model trained on a single pendulum cannot easily predict the behavior of a double pendulum, as the interactions within coupled systems add complexity beyond a simple combination. Could you clarify the difference between this scenario and the benchmarks used in the paper, or at least explain if this method can be applied to this scenario?\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"See above.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer xu7A (2)\", \"comment\": \">Re3: Since the model is trained on decoupled data, the approach's success may depend on the quality of this initial data. The paper could benefit from further discussion on the robustness of MultiSimDiff when the decoupled data does not closely resemble the coupled dynamics.\", \"answer\": \"Thank you for suggesting the baseline. We have incorporated the Graph Isomorphism Network (GIN) [1] and the Spectral Attention Network (SAN) [2], which were suggested by Reviewer 9cxV, as robust benchmarks for graph neural networks (GNNs) and graph transformers, respectively. We also conducted a hyperparameter search to obtain the best performance.
GIN and SAN are trained on small graphs with 16 components and tested on large graphs with 64 components. Due to the uniformity of graph structures in all training data and the fact that SAN learns a global relationship, SAN fails to generalize to larger structures. We have updated Table 3; the results indicate that our method has significantly better performance. Part of Table 3 is also provided as follows (the unit is $1\\\\times 10^{-2}$):\\n| Method | single T | single \\u03b5 | 16-component T | 16-component \\u03b5 | 64-component T | 64-component \\u03b5 |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| GIN | - | - | 1.96 | 3.18 | 4.63 | 7.02 |\\n| SAN | - | - | 0.114 | 16.5 | 100 | 11800 |\\n| MultiSimDiff (ours) | 0.107 | 0.303 | 0.213 | 1.03 | 0.759 | 1.94 |\\n\\n[1] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.\\n\\n[2] Kreuzer, Devin, et al. \\\"Rethinking graph transformers with spectral attention.\\\" Advances in Neural Information Processing Systems 34 (2021): 21618-21629.\"}", "{\"title\": \"Official Response to Reviewer xu7A (3)\", \"comment\": \">Re5: How does MultiSimDiff handle cases where the decoupled training data does not closely match the dynamics of coupled data? Would the model performance degrade significantly?\", \"answer\": \"Firstly, in terms of accuracy, since we currently regard the numerical program's solution as the ground truth, the accuracy can only be as close as possible to the numerical program. Regarding efficiency, for very simple problems, such as the solution of the reaction-diffusion equation in experiment 1, our algorithm does not hold an advantage. However, for slightly more complex cases, like experiments 2 and 3, our algorithm is capable of achieving significant acceleration.
The more complex the problem, the higher the efficiency of our algorithm compared to numerical programs.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your constructive feedback and for raising our manuscript's score. Your insights have been invaluable in enhancing the quality of our work.\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer 1Pcu,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"summary\": \"This paper addresses an important problem in the area of computational physics: namely, the composition (coupling) of multi-physics and multi-component simulations. The authors present a 'Bayesian approach' to model composition, with Eq 6 being the 'foundation' of their proposed method.\\n\\nThe reviewer agrees with the importance of the problem and with the general (proposed) approach. The reviewer, however, does not find the literature review to be sufficiently broad as the proposed method (as given) has been proposed in the computational mechanics literature they use as motivation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The main strength of this paper is the use of a Bayesian approach to allow simulation scientists to 'decouple' their solvers by characterizing them in a probabilistic way.\", \"weaknesses\": \"The major weakness of the paper is that it is not clear that the method is novel. 
For those who are embedded in the work of D.A. Knoll and D.E. Keyes (paper referenced by the authors -- good paper), George El Haber, Jonathan Viquerat, Aurelien Larcher, David Ryckelynck, Jose Alves, Aakash Patil, and Elie Hachem (JCP paper referenced), etc. --- these people (e.g., David Keyes) would probably start a history lesson by pointing out the seminal paper of Kennedy and O'Hagan (Bayesian calibration of computer models, Jan 2002) as the starting point of an entire class of methods on using the Bayesian approach for coupling, uncertainty estimation, etc. With the Kennedy and O'Hagan paper as a starting point, you get things like:\", \"https\": \"//www.sciencedirect.com/science/article/pii/S0021999119304206\\n\\nand then particular people like Karen Willcox (UT-Austin), Youssef Marzouk (MIT), etc. and their use of Bayesian methods for \\\"all kinds of things.\\\" \\n\\nGiven that there is a rich history of these methods within the journals referenced by the authors and given that it is difficult to evaluate the novelty of the statements against this 20-year history, the reviewer (at this time) cannot recommend the paper for acceptance.\", \"http\": \"//mcubed.mit.edu/files/public/RT3/2016__Allaire__Quantifying_Model_Discrepancy_in_coupled_multi-physics_systems.pdf\", \"questions\": \"What is the novelty of the method in comparison to the papers mentioned above and more broadly the papers/journals in which these papers reside?
The papers mentioned below are not necessarily the seminal papers, but what one gets by googling with keywords associated with the topic and the journals mentioned by the authors.\\n\\nSpecifically, in terms of addressing the weaknesses:\\n\\n+ How does the authors' compositional diffusion model approach compare to the Bayesian calibration methods of Kennedy & O'Hagan and subsequent work that builds on this (of which the reviewer has given some, but which is a vast area)?\\n\\n+ Whether and how the authors' method of learning conditional energy functions and composing them differs from existing Bayesian coupling approaches. The Bayesian approach as presented seems consistent with what a practitioner might do for uncertainty quantification, but does not replace the weak and strong coupling methods mentioned in the early part of the paper (which is interested in specific instances, not probabilistic statements).\\n\\nThe authors are asking if the focus on using decoupled training data to predict coupled solutions, and small structure data to predict large structures, represents a novel contribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal Dear Reviewers\", \"comment\": \"Thank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating.
Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"title\": \"A gentle reminder: please respond to our rebuttal\", \"comment\": \"Dear Reviewer 6eU9,\\n\\nThank you for your time and effort in reviewing our work. We have carefully considered your detailed comments and questions, and we have tried to address all your concerns accordingly.\\n\\nAs the deadline for author-reviewer discussions is approaching, could you please go over our responses? If you find our responses satisfactory, we hope you could consider adjusting your initial rating. Please feel free to share any additional comments you may have.\\n\\nThank you!\\n\\nAuthors\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their thorough and constructive comments. We are glad that the reviewers recognize that our method is novel (xu7A, 9cxV, HRRS, FSCM, 1Pcu), our experimental results are convincing (xu7A, FSCM), and our dataset is practical (1Pcu, FSCM).\\n\\nBased on the reviewers' valuable feedback, we have conducted a number of additional experiments and revised the manuscript, which hopefully resolves the reviewers' concerns. The major additional experiments and improvements are as follows:\\n\\n1. We add a comparison between the training and testing datasets to highlight their significant differences, as raised by Reviewer xu7A. We calculate Wasserstein distances and use the t-SNE algorithm to visualize this difference in Figures 11, 12, and 13. The results and analysis are in Appendix G. This significant difference between training and testing datasets illustrates the difficulty of the task and also demonstrates the capabilities of our algorithm. For more details, please see the responses to Reviewer xu7A.\\n2. We use DDIM to accelerate sampling and compare the run time of numerical programs, surrogate models, and our method, as raised by Reviewers xu7A, 9cxV, HRRS, and 1Pcu. The results and analysis of DDIM sampling are in Appendix H. The comparison of the running time is in Appendix I. We find that our method is efficient and achieves accelerations of 29 and 41 times compared with numerical programs in experiment 1 and experiment 2, respectively.\\n3. We add the application scenarios of our algorithm for multi-component simulation, as suggested by Reviewer 1Pcu. The analysis is in Appendix J. Through this analysis, we have gained a deeper understanding of the applicability and reasons behind our algorithm. For more details, see the responses to Reviewer 1Pcu.\\n4. We add two new baselines for multi-component simulation: a graph neural network and a graph transformer, as suggested by Reviewers xu7A and 9cxV. Our algorithm significantly outperforms the baselines, demonstrating its capability for modeling multi-component problems and generalizing to more complex structures. For more details, see the responses to Reviewers xu7A and 9cxV.\\n5. We add an overall description of our datasets and compare them with existing datasets, as suggested by Reviewer 9cxV. The datasets of experiments 2 and 3 are a contribution to the community, as they offer multiphysics and multi-component aspects. For more details, see the responses to Reviewer 9cxV.\\n6. We have updated some of the expressions in the manuscript to make it easier for the readers to understand what multiphysics and multi-component simulation refer to in this manuscript, as raised by Reviewer 9cxV. Multiphysics consists of multiple physical processes, where each process may contain one or more fields. A multi-component system is a complex structure composed of multiple similar components. A component is defined as a repeatable basic unit that makes up a complete structure. Multi-component simulation typically requires generalizing to larger structures than seen in training. The solution on components can also involve multiple physical processes.\\n7.
We add an explanation for why other compositional models are not suitable for our tasks, as raised by Reviewer 1Pcu.\"}" ] }
El4Cs8Su3r
LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity
[ "Walid Bousselham", "Angie Boggust", "Sofian Chaybouti", "Hendrik Strobelt", "Hilde Kuehne" ]
Vision Transformers (ViTs) have become a standard architecture in computer vision. However, because of their modeling of long-range dependencies through self-attention mechanisms, the explainability of these models remains a challenge. To address this, we propose LeGrad, an explainability method specifically designed for ViTs. LeGrad computes the gradient with respect to the attention maps of single ViT layers, considering the gradient itself as the explainability signal. We aggregate the signal over all layers, combining the activations of the last as well as intermediate tokens to produce the merged explainability map. This makes LeGrad a conceptually simple and easy-to-implement method to enhance the transparency of ViTs. We evaluate LeGrad in various setups, including segmentation, perturbation, and open-vocabulary settings, showcasing its improved spatial fidelity as well as its versatility compared to other SotA explainability methods.
[ "segmentation;gradient-based method" ]
https://openreview.net/pdf?id=El4Cs8Su3r
https://openreview.net/forum?id=El4Cs8Su3r
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jMx9TvrgG3", "g7JTDvd27D", "g2mK413C6p", "YwIWZbTghz", "2zcmVhquIt" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730479936775, 1731049964848, 1731650936576, 1730120504984, 1729258413965 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1851/Reviewer_Vs78" ], [ "ICLR.cc/2025/Conference/Submission1851/Reviewer_Tim1" ], [ "ICLR.cc/2025/Conference/Submission1851/Authors" ], [ "ICLR.cc/2025/Conference/Submission1851/Reviewer_oxTL" ], [ "ICLR.cc/2025/Conference/Submission1851/Reviewer_rguk" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies the explainability of Vision Transformers (ViTs). The authors propose a new gradient based method, LeGrad, to produce explanation maps given pre-trained ViTs and specific inputs. LeGrad aggregates the gradient signal and combines the activations of the last and intermediate tokens to produce the merged explanation maps. Experiments show that the proposed LeGrad achieves better results in segmentation, perturbation, and open-vocabulary settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Unlike previous works, this paper extends the scope of ViT explainability to include new models like OpenCLIP ViTs and new settings like open vocabulary.\\n\\n2. this paper is well-written and easy to follow.\\n\\n3. comprehensive experiments. Tables 1 and 2 show that LeGrad significantly outperforms the baseline methods.\", \"weaknesses\": \"1. the technical contribution of LeGrad is not very significant. The idea of using gradient as an explanation signal and aggregating information across layers [1, 2] has been well-studied and widely applied in the literature on ViT explainability, making this work incremental.\\n\\n2. The method sums up the gradients across layers instead of performing matrix multiplication. This is different from the attention flow model in [1, 2, 3]. 
A detailed analysis of the motivation for summing up the signals would largely strengthen this paper.\\n\\n3. Similar to weakness 2. The proposed LeGrad method mainly focuses on the gradient itself as an explanation signal instead of the activation. However, the reason behind this design has not been clearly explained.\\n[1] Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers. ICCV 2021.\\n[2] Transformer Interpretability Beyond Attention Visualization. CVPR 2021.\", \"questions\": \"The visualization results (Figures 1, 5, 7, 11) indicate that LeGrad may produce a noisy background in the explanation maps, a problem studied in previous work [1, 2]. Is there any analysis on this issue? A possible reason is that the gradient signal is noisy in background areas and thus needs calibration. Can the vector norm methods in [1, 2] help improve the clearness in the background produced by LeGrad?\\n\\n[1] Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer. CVPR\\n2024.\\n\\n[2] Attention is Not Only a Weight: Analyzing Transformers with Vector Norms. EMNLP 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces LeGrad, a new explainability method tailored for Vision Transformers (ViTs), which leverages gradients with respect to attention maps to generate relevancy maps that highlight the most influential parts of an input image for model predictions. By conducting a layer-wise analysis, LeGrad aggregates information across multiple layers, revealing the contributions of individual layers to the model's decision-making process. 
The method is evaluated across various tasks, including object segmentation and open-vocabulary detection, demonstrating good performance and more focused visual explanations compared to existing state-of-the-art methods like GradCAM and CheferCAM.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"LeGrad effectively utilizes a layer-wise approach to explainability, allowing it to aggregate gradients from multiple layers of the Vision Transformer. This comprehensive analysis provides a more nuanced understanding of how different layers contribute to the model's predictions, enhancing interpretability.\", \"By focusing on gradients with respect to attention maps, LeGrad captures the sensitivity of feature representations, enabling the generation of relevancy maps that highlight the most influential parts of an image.\", \"LeGrad demonstrates strong performance across various challenging tasks, such as object segmentation and open-vocabulary detection, achieving comparable results compared to other state-of-the-art explainability methods.\"], \"weaknesses\": [\"Missing references [1,2,3].\", \"The idea of the proposed method sounds very normal. It is very common and has been well studied to use gradient and attention to localize the object in Transformer-based architecture. Talking about text-image alignment, [4] also shows a more precise segmentation results than the proposed method.\", \"Figure 4 shows typo in the x-axis. The x-axis means the \\\"% of accumulated layer\\\", so it should be 10, 20, ..., 100 rather than 0.1, 0.2, ..., 1.0.\"], \"references\": \"[1] IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers. \\n\\n[2] Emerging properties in self-supervised vision transformers. \\n\\n[3] Exploring Visual Explanations for Contrastive Language-Image Pre-training. 
\\n\\n[4] Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This manuscript proposes LeGrad, a saliency method that visualizes the inner behavior of a vision transformer. After obtaining average over output tokens and mapping layer, LeGrad computes gradient of score with respect to attention map. The accumulation of gradients now becomes the saliency map that LeGrad provides.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Empirical results exhibit improved quality of saliency, which is validated across numerous benchmarks. I would also like to mention that detailed technical description such as how Grad-CAM can be implemented in ViT with respect to token axis may be valuable to several practitioners.\", \"weaknesses\": \"Honestly speaking, I think that this manuscript provides not much significant advancement from existing variants of Grad-CAM or existing visualization methods used in ResNets. The summation over gradient with respect to score seems a minor variant of Grad-CAM and highly similar to those existing researches. I would say the novelty of the proposed method lies in detailed engineering such as computation over the token axis and reshaping operations to be compatible with ViTs. A new idea would be something like obtaining saliency maps over multiple layers and averaging them; though it may yield improved quality, it is essentially engineering, not the scientific finding. In summary, I think that LeGrad exhibits little advancement over existing saliency methods, and the proposed method seems mere engineering, not a scientific finding. 
I wonder whether the LeGrad exhibits improved theoretical properties or was just validated through empirical experiments.\\n\\nAlthough the mapping model C can be easily implemented for vision-language models like CLIP, the standard ViT requires the mapping model C through additional classifier layers that are fine-tuned after appending them to ViT. In other words, the proposed method cannot be immediately applied to standard ViT and requires additional fine-tuning, which makes it difficult to be deployed immediately. Indeed, ViT with GAP is rarely used; standard ViT uses an output score obtained from the class token without average operation. This issue also raises limitations for the proposed method.\", \"minor_comments\": \"LayerNorm is omitted in Eq. 1.\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the problem of explainable AI specifically designed for Vision Transformers. The authors propose using the derivative of the attention layer at each individual layer. They suggest treating each layer independently by applying a classification head that takes a function of the output from each layer and aggregates the impact across all heads and layers.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1) The idea of dividing the explainability into a per-layer impact is sound and reasonable, potentially offering better insights into the information flow throughout the entire architecture.\\n2) The method is tested across a wide range of ViT adaptations and on classic classification as well as open-vocabulary settings, demonstrating its versatility.\\n3) The method compared in a versatile applications, especially audio to image tracking.\", \"weaknesses\": \"1) The writing level is not up to standard, and this affects the clarity and coherence of the paper. 
The lack of a main chain of thought makes it difficult to follow.\\n\\n1.a) Typos: influcened -> influenced (line 47), explanability -> explainability (line 49)\\n\\n1.b) English mistakes: .. the explainability *(of)* those architectures remains a challenge (line 55), \\n This facilitates its adoption across various applications and architecture*(s)* (line 67)\\n\\n1.c) Too long sentences, lack of appropriate punctuation (lines 69-72)\\n\\n2) The research gap the paper aims to address is unclear. The introduction mentions that \\\"the explainability of those architectures remains a challenge\\\" but doesn\\u2019t fully explain why. LeGrad is then introduced as differ from CheferCam, but the main advances LeGrad provides over other methods are not clarified. Why are these specific advances? why are they needed and useful?\\n\\n3) Lack of intuition behind architectural choices of the approach. For example: Why using min-max normalization? why do you average on heads and layers in Eq. 5 (it is common to average heads, but not necessarily for layers)? why do you average patches together with CLS for the mapping in line 190 (I familiar with classification ViTs working either on CLS or on average of patch tokens but not the combination of both). All of these make it difficult to follow-up works to grasp the innovative delta and extend it. Moreover, the ablation is not cover parts of the method. Therefore, this is not clear what part of the method is mostly boost the performance.\", \"questions\": \"1) Why are the results of CheferCam are not correlate with the results stated in the paper of Chefer et al.? for the negative they declare a result of 55.04 on the target (also on imagenet-val dataset) which I could'nt find in your table.\\n2) I didn't understand from the written if the C mapping is learned once only on top of the last layer and applied to all others? or you have L such mappings, one per layer? 
Moreover, if learned once only on ImageNet, have you used the same one on other analyzed datasets (i.e. audio)?\\n3) Explainability methods are preferable when they have some interpretability. How one can use your method in order to better understand the performance of ViT? for example - which layers/heads contributed the most? how can we use this method in order to debug misclassified samples (e.g. in case of classification).\\n\\nOverall I think that the direction of analyzing each layer separately is a good one, with a potential to improve explainability and even interpretability. Although I think that the paper is lack of intuitions behind some choices, make it difficult for follow-up works. Moreover, the writing level of the paper is somewhat low and hard to follow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EkfLaCJ7bk
TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention
[ "Lijie Yang", "Zhihao Zhang", "Zhuofu Chen", "Zikun Li", "Zhihao Jia" ]
Large language models (LLMs) have driven significant advancements across diverse NLP tasks, with long-context models gaining prominence for handling extended inputs. However, the expanding key-value (KV) cache size required by Transformer architectures intensifies the memory constraints, particularly during the decoding phase, creating a significant bottleneck. Existing sparse attention mechanisms designed to address this bottleneck have two limitations: (1) they often fail to reliably identify the most relevant tokens for attention, and (2) they overlook the spatial coherence of token selection across consecutive Transformer layers, which can lead to performance degradation and substantial overhead in token selection. This paper introduces TidalDecode, a simple yet effective algorithm and system for fast and accurate LLM decoding through position persistent sparse attention. TidalDecode leverages the spatial coherence of tokens selected by existing sparse attention methods and introduces a few token selection layers that perform full attention to identify the tokens with the highest attention scores, while all other layers perform sparse attention with the pre-selected tokens. This design enables TidalDecode to substantially reduce the overhead of token selection for sparse attention without sacrificing the quality of the generated results. Evaluation on a diverse set of LLMs and tasks shows that TidalDecode closely matches the generative performance of full attention methods while reducing the LLM decoding latency by up to $2.1\times$.
[ "efficient transformer serving", "sparse attention decoding" ]
Accept (Poster)
https://openreview.net/pdf?id=EkfLaCJ7bk
https://openreview.net/forum?id=EkfLaCJ7bk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xZavzAXMED", "vAHiWGBacU", "lm650I7YRb", "kjP7yK5DXa", "hhj8XryNHN", "d1YAMM2qPI", "bcfiHQqNWQ", "PfkMgV3mlE", "NeCjcdgSxx", "MzCmDlX3L6", "M7eedLlFlN", "L4VTxAkc4f", "KDpUhQ3MWp", "FYAdI561uO", "F3ik6hnyuT", "E2shdvdIr9", "2UdXX4hF6H" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "decision", "official_comment" ], "note_created": [ 1730592897933, 1732179757664, 1732112940430, 1732113659248, 1730356920174, 1732200042260, 1732199154640, 1732114858371, 1732198892529, 1732113129344, 1732504172730, 1732114074111, 1730473011410, 1734601502916, 1730688961220, 1737523891075, 1732508278746 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_9wNX" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_oNeu" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_mweX" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_oNeu" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_mweX" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_oUXc" ], [ "ICLR.cc/2025/Conference/Submission8154/Area_Chair_121q" ], [ "ICLR.cc/2025/Conference/Submission8154/Reviewer_oNeu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8154/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduce a decoding method for sparse attention with pre-selected tokens. 
The method essentially performs full attention at lower layers and sparse selective attention at the upper layers to obtain the selected tokens, and then full-attention on reduced sets of tokens in the highest layers. The paper presents experiments to show that TidalDecode is competitive to full attention decoding while reducing latency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents an optimization strategy to improve speed and latency by recognizing that the selected tokens in sparse attention remain consistent across layers, so that the selection process may only need to happen in lower layers.\", \"The paper presents evaluation to show the method is competitive to some baselines on latency evaluation\"], \"weaknesses\": [\"The writing and presentation of the paper significantly need improvement. There are too much unnecessary underlined texts.\", \"The motivation and introduction are written unclearly, with too much irrelevant introductional texts.\", \"The preliminary and methodology are written in confusing ways. There is a need to put clearer definition and better writing.\", \"Generally, I would lean towards acceptance if the paper is revised thoroughly.\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"So basically you do not use GQA in tidal attention, right?\"}", "{\"title\": \"Respond to reviewer oNeu\", \"comment\": \"We thank the reviewer for their review and address their main concerns below.\\n\\n`Q1. dataset selection in the experiments.`\\n\\nWe agree with the reviewer that selecting a dataset with a longer generation length is important to distinguish between the eviction-based and selection-based methods. 
To this end, we further provide experimental results on the summarization task GovReport and code completion tasks RepoBench (RB) and LCC in Table 5 below and the revised paper. TidalDecode achieves the highest score on the added summarization task GovReport, where the model generates 512 tokens for each question. For code completion tasks, when selecting top-4096 tokens, TidalDecode outperforms full attention and Quest on RepoBench and stays close to full attention on LCC. We also include Table 4 below and in the revised paper to show the generation length for each task in LongBench.\", \"table_4\": \"| | MFQA | NrtvQA | Qasp | 2Wiki | HotQA | GovRep | QSum | TrivQA | PRe | RB | LCC | Avg |\\n|--------|------|--------|------|-------|-------|--------|------|--------|-----|----|-----|-----|\\n| Length | 64 | 128 | 128 | 32 | 32 | 512 | 512 | 32 | 32 | 64 | 64 | 145 |\", \"table_5\": \"| | GovRep | RB | LCC | Avg |\\n|-----------------|-----------|-----------|-----------|-----------|\\n| Full | 16.77 | 43.97 | 44.88 | 33.11 |\\n| Quest (K=1024) | 15.89 | **49.43** | **49.66** | **31.61** |\\n| TD+L13 (K=1024) | **16.41** | 43.29 | 40.67 | 31.49 |\\n| Quest (K=4096) | 16.71 | 43.87 | 43.84 | 32.13 |\\n| TD+L13 (K=4096) | **16.78** | **44.54** | **44.27** | **33.50** |\\n\\nAdditionally, some other evidence that shows the improvement brought by TidalDecode: 1. In the original and extended LongBench results (Table 3 and Table 5), TidalDecode can both achieve a better score on average compared to the full-attention version across all eight or eleven tasks, where the average generation length is 145 tokens for a test. 2. In our sensitivity analysis for the choice of the token reselection layer (Table 9-14), choosing different token reselection layers can introduce a huge model performance degradation (e.g., from 100% to 0%), which indicates that the generation quality in the needle-in-the-haystack tasks also highly depends on the algorithm we used in the decoding stage. 
\\n\\n`Q2. I/O for group query attention.`\\n\\nAs state-of-the-art kernel implementation for the Group Query Attention (GQA) only loads the keys and values from KVCache once within a group for all the query heads in the same group, if we choose to use a token budget of $k$, then the IO reduction ratio for KV loading would be: $\\\\frac{\\\\min{(\\\\text{seqlen}, \\\\bigcup_{i \\\\in \\\\text{q-head-group}}B_i)}}{\\\\text{seqlen}}$ where $B_i$ is the top-k token indices selected by query head $i$ within the group. So, the reduction rate range spans from $\\\\frac{k}{\\\\text{seqlen}}$ to $\\\\min{(1, \\\\frac{\\\\text{GroupSize}*k}{\\\\text{seqlen}})}$. In practice, using a $k$ satisfying $\\\\text{GroupSize}*k \\\\ll \\\\text{seqlen}$ can already achieve a pretty good performance, and the query head will usually share many top-k indices to reduce the I/O further. \\n\\nFor the dot product results that we saved, the I/O is negligible compared to the I/O for the KVs as the dot product is of shape (batch_size, seq_len, q_head_num) while the KVs are 2*(batch_size, seq_len, head_dim*kv_head_num). In the future, we plan to remove the materialization of the dot product by fusing the top-k operation with the attention kernel.\\n\\n`Q3. Presentation and references.`\\n\\nWe thank the reviewer for pointing out the issues in our presentation. We have added all the citations for the baselines in the caption of Table 1 and a necessary description of Quest in section 4.2.1. All the modifications are highlighted in blue.\"}", "{\"title\": \"Respond to reviewer oUXc\", \"comment\": \"We thank the reviewer for their review and address their main concerns below.\\n\\n`Q1/W1. Adaptive layer selection strategy`\\n\\nWe agree with the reviewer that the model's performance is highly sensitive to the choice of the token re-selection layer. 
However, as you have suggested, our observations show that the optimal token re-selection layer choice is consistent across different task and decoding steps for models within the same model family. We hypothesize that this is an intrinsic property resulting from specific models' pretraining process. This phenomenon is an interesting observation but lacks theoretical analysis, which we leave as a future work. To sum up, we don\\u2019t have an adaptive layer selection strategy at the current stage since the optimal choice is consistent within the same model family. We can thus directly identify the optimal choice from sample data only once and use it for all later deployments. If the optimal layer choice within a model family does vary for different downstream tasks or even different decoding steps, there is certainly a need to design a flexible, adaptive layer selection strategy. \\n\\n`W2. Figure 2`\\n\\nWe thank the reviewer for pointing out the confusion in our presentation for Figure 2. We have updated our descriptions and definitions for terms (e.g., recall rate) in the text and the caption, which have been highlighted in blue. More specifically, Figure 2a shows the overlap ratio of the top-k tokens selected between different layers, which is used to calculate the recall rate in Figure 2b. The recall rate is defined as the averaged overlap ratio in Figure 2a over the sparse attention layers between the third layer and the corresponding token re-selection layer, and over all the sparse layers after the chosen re-selection layer. For instance, the recall rate for choosing layer 13 as the token re-selection layer is the average of the overlap ratio over the two red bars in Figure 2a. Then, we can clearly see that adding a token re-selection layer around layer 13 can significantly boost the recall rate due to the improvement in the overlap ratio from the purple bar under layer 3 to the red bar under layer 13 in Figure 2a. 
Figure 2b quantitatively captures the improvement. We can observe that selecting layer 13 as the token re-selection layer is optimal if we only allow one additional token re-selection layer.\", \"the_reason_why_the_overlap_is_not_that_significant_is_two_folded\": \"1. As the context length that we used is 100K and the token budget is 256, an overlap ratio of 40% is relatively significant compared to the sparsity ratio of 1/400. However, the heatmap we used for Figure 2a allows a maximum value of 1, making 0.4 less obvious. 2. Besides the token selection overlap ratio, we have added additional evidence that supports our claim in Figure 8 in the appendix. The figure captures the cosine similarity between adjacent layers\\u2019 attention scores, and we have provided our justifications along with the figure. The reason why we use cosine similarity is that the token overlap ratio only focuses on the number of shared tokens rather than the contribution of each token in terms of the attention score. For instance, two layers can have only a few tokens that are shared out of the 256 tokens we chose, but these several tokens can contribute a huge portion of the attention scores, which can be captured by the cosine similarity metrics we used in Figure 8. The resulting attention score similarity (correlation) for adjacent layers in Figure 8 is much higher than the token overlap ratio in Figure 2a, which provides further evidence for our design.\\n\\n`Q2. KV Cache Correction`\\n\\nFor most of the tasks in our evaluations, the generation length is insufficient to see a degradation in model quality. On the other hand, to make it a fair comparison against other baseline models that haven\\u2019t applied cache correction, we don\\u2019t use the KV cache correction feature for all our current experiments. However, to provide some results on KV cache correction, we provide additional experimental results for the LongBench test with KV cache correction in Figure 9 in the appendix. 
Albeit the improvement is not significant due to a short generation length, the consistent improvement brought by cache correction indicates that cache correction effectively mitigates the adverse effects of polluted KV representations, which stem from sparse attention mechanisms and can accumulate errors, potentially reducing model performance. \\n\\n`Q3. Better performance than full attention baseline`\\n\\nBesides potential variances, we hypothesize that the rationale behind this phenomenon is that the sparse attention mechanism can focus more on the actual important tokens while removing noisy ones that are irrelevant to the question. We also identify it as a pretty interesting observation but lacks comprehensive analysis, which we leave as a future work.\"}", "{\"summary\": \"The paper presents TidalDecode to enhance the efficiency and accuracy of large language models (LLMs) through position persistent sparse attention. TidalDecode introduces token selection layers to identify and retain the most relevant tokens, with other layers using sparse attention to manage memory and reduce computational demands. Empirical evaluations indicate that TidalDecode can achieve up to a 2.1\\u00d7 reduction in latency while maintaining comparable generative performance to full attention mechanisms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method is effective, maintaining the advantages of both eviction and selection-based sparse attention methods according to experiments.\\nThe paper presents robust evaluations across various LLMs and tasks, including LongBench and Needle-in-the-Haystack tests. Results indicate that TidalDecode consistently performs well compared to state-of-the-art methods.\", \"weaknesses\": \"1. Definition of `spatial coherence`. The spatial coherence seems just come from nowhere. I can mostly understand it refers to the consecutive layers have significant attention overlaps. 
But there is not a `scientific definition` in this paper.\\n2. Complexity of Implementation: TidalDecode's reliance on custom GPU kernels and specific layer configurations could limit its accessibility and adoption. Please provide pseudo code or GPU consumption data, it could make the results more clear.\\n3. The proposed method can be considered as a kind of hierarchical attention, more related baselines can be included. such as [1,2]\\n4. KV-cache correction seems to be crucial, no ablation study (or I just missed). Please provide the results without KV-cache correction.\\n5. training details, e.g, learning rate, training tokens, GPU hours, etc. It would help the reproduction.\\n6. Does `L` matter in different models? How to determine the optimal hyper-parameters. It would be appreciated if automatic prediction of `L` or a guideline for `L` selection across different model size, model architecture, etc.\\n7. There might be some conflicts in the packages used in latex.\\n\\n[1] Chalkidis, Ilias, et al. \\\"An exploration of hierarchical attention transformers for efficient long document classification.\\\" arXiv preprint arXiv:2210.05529 (2022).\\n[2] Yang, Zichao, et al. \\\"Hierarchical attention networks for document classification.\\\" Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. 2016.\", \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Respond to reviewer oNeu\", \"comment\": \"For the full attention baseline in the efficiency evaluation section (section 4.3), we are using the state-of-the-art FlashInfer kernel implementation which leverages the flash attention decoding with kvcache. 
Our kernel implementation also uses the flash attention technique where the attention score is never fully materialized, so that's why our current implementation is storing the dot product between query and keys rather than the attention score. However, to further optimize, we are planning to fuse the argtop-k operator with the attention kernel so there is no need to store the dot product anymore (in a way that each cuda block have a local top-k followed by a reduction over all blocks to gather the final top-k).\"}", "{\"comment\": \"How is your kernel (with attn score) compared with flash_attn_with_kvcache?\"}", "{\"title\": \"General respond\", \"comment\": \"We thank the reviewers for all their constructive comments. Based on the comments (please refer to our individual replies for details), we have prepared a revised submission, where we focus on clarifying the most important confusions we identified and providing additional results as much as possible. All revised texts are in blue. Most of the additional experimental results and figures are included in Appendix A.1-3.\", \"summary\": \"1. Added Figure 8 to the appendix, which depicts the cosine similarity of the attention scores between adjacent layers to further support our motivation to use position persistent sparse attention. \\n2. Added the generation length in Table 4 and additional results on LongBench in Table 5 in the appendix to demonstrate the effectiveness of our method on tasks with a longer generation length.\\n3. Added Figure 9 in the appendix to show the effects of KV cache correction.\\n4. Removed redundant underlines to improve the presentation of the paper.\\n5. Rewrote the introduction section and the methodology section to make the motivation as well as the description of the methodology clearer to the audience. \\n6. Added citations to baseline methods and a brief description of Quest.\\n7. 
Rewrote the description for Figure 2 to clarify certain terms (recall rate) to resolve potential confusion.\"}", "{\"title\": \"Respond to reviewer oNeu\", \"comment\": \"Most of our evaluation results are on LLaMA-3/3.1/3.2, which uses GQA, so our method can support GQA-based models. However, within the same KV group, instead of using a shared set of top-k indices, tidal attention allows each query head to calculate their top-k indices independently for better performance. But we want to note that we only need to fetch KVs once for overlapping indices among different query heads as they share the same KV head. More specifically, if we use $B_i$ to denote the top-k indices calculated by each query head, we only need to fetch $\\\\bigcup_i{B_i}$ in the sparse layers. We thank the reviewer for the comment and would be happy to address any additional concerns from the reviewer, if any.\"}", "{\"title\": \"Respond to reviewer 9wNX\", \"comment\": \"We thank the reviewer for their review and address their main concerns below.\\n\\n`Q1. Writing - underlines`\\n\\nWe thank the reviewer for pointing out the issues in our presentation. In our revised paper, we removed most of the phrases emphasized with underlines and only kept essential ones when they first appeared. \\n\\n`Q2. Writing - motivation and introduction`\\n\\nWe have removed some unnecessary descriptions in the introduction section and rewritten some paragraphs (in blue) to make it more concise while preserving essential information. The introduction section now follows an order of 1. LLM and long context generation background 2. The bottleneck of long context LLM decoding lies in KV cache memory access (motivation) 3. A brief overview of existing methods 4. A brief introduction of our method (TidalDecode) 5. Evaluation results and summary of contributions.\\n\\n`Writing - preliminary and methodology`\\n\\nWe have rewritten some descriptions in the methodology section that might cause confusion. 
Additionally, we have added the necessary explanations and definitions for certain figures and terms, like recall rate. All the modifications are highlighted in blue. \\n\\nWe hope we have addressed most of the writing concerns and misunderstandings raised by the reviewer. Please let us know if there is anything else we can further clarify.\"}", "{\"comment\": \"Thanks for the responses.\\n1. I'm confused by the response to `W4`. It doesn't make any sense to me that the authors introduced some methods, but didn't use them in the experiments. TBH the augments cannot convince me. It's just a `useless` method that would make the paper more \\\"scientific\\\" to me if the authors express such arguments: `Even though we have introduced the mechanism of KV cache correction, we don't use the feature for all our current experiments to make it a fair comparison against other baseline models that haven\\u2019t applied cache correction.`\\n2. Response to W6. As a summarization, the authors cannot provide any selection strategy, this will greatly reduce the real world application of the proposed method, as we can only try every L for optimal performance. Even on a small dev test, it would be very unreliable and time-consuming.\\n\\nAs a result, the author's responses did not address my concerns, I would like to keep the score unchanged.\"}", "{\"title\": \"Respond to reviewer mweX\", \"comment\": \"We thank the reviewer for their review and address their main concerns below.\\n\\n`W1. Spatial Coherence`\\n\\nWe thank the reviewer for pointing out our use of spatial coherence and the need for a definition. We have modified accordingly in the fourth paragraph of the introduction section with a definition of spatial coherence that is highlighted in the underlines. \\n\\n`W2. 
Complexity of Implementation`\\n\\nFor the GPU kernel implementation, we are building on top of the state-of-the-art attention kernel implementation library FlashInfer, which has been used in vLLM, SGLang, MLC-LLM, etc. Thus, we believe that our implementation can be easily integrated into popular LLM serving systems. In addition, our kernel implementation code has been anonymized and included in the supplementary materials for your reference. \\n\\n`W3. Hierarchical Attention Networks`\\n\\nWe thank the reviewer for suggesting potential related works. We want to clarify that TidalDecode is a training-free method that can be directly applied to existing pretrained models like LLaMA-2 and LLaMA-3 during the inference phase, which is different from the hierarchical attention methods [1, 2] that requires pre-training. \\n\\n`W4. KV-cache correction`\\n\\nEven though we have introduced the mechanism of KV cache correction, we don't use the feature for all our current experiments to make it a fair comparison against other baseline models that haven\\u2019t applied cache correction. Additionally, for most of the tasks in our evaluations, the generation length is insufficient to see a degradation in model quality, so the model performance is still maintained even without KV cache correction. However, to show the effects of KV cache correction, we provide additional experimental results for the LongBench evaluation with KV cache correction on the original eight tasks and include results in Figure 9 in the revised paper. Albeit the improvement is not significant due to a short generation length, cache correction with a stride of 4 improves performance in five out of eight tasks, and a stride of 8 yields improvements in six out of eight tasks. These results highlight the efficacy of cache correction in mitigating the adverse effects of polluted KV representations, which stem from sparse attention mechanisms and can accumulate errors, ultimately degrading model performance. 
\\n\\n`W5. Training details`\\n\\nPlease refer to the response to W3, as our method is a training-free method.\\n\\n`W6. How to choose the optimal token re-selection layer (L)`\\n\\nOur observations show that the optimal token re-selection layer choice is consistent across different task and decoding steps for models within the same model family. We hypothesize that this is an intrinsic property resulting from specific models' pretraining process. This phenomenon is an interesting observation but lacks theoretical analysis, which we leave as a future work. To sum up, we don\\u2019t have an automatic layer selection strategy at the current stage since the optimal choice is consistent within the same model family. We can thus directly identify the optimal choice from sample data only once and use it as a guideline for all later deployments. If the optimal layer choice within a model family does vary for different downstream tasks or even different decoding steps, there is certainly a need to design a flexible, adaptive layer selection strategy. \\n\\n`W7. Conflicts in the packages used in latex.`\\n\\nWe have tried our best to resolve all the warnings in the compilation for the revised paper. Please let us know if the package conflicts still persist. \\n\\n**References:**\\n\\n[1] Chalkidis, Ilias, et al. \\\"An exploration of hierarchical attention transformers for efficient long document classification.\\\" arXiv preprint arXiv:2210.05529 (2022). \\n\\n[2] Yang, Zichao, et al. \\\"Hierarchical attention networks for document classification.\\\" Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. 2016.\"}", "{\"summary\": \"This article introduces a new sparse attention mechanism, TidalDecode, which is an efficient selection-based attention computing mechanism for the second stage of LLM inference, specifically the decoding stage. 
The authors discovered through empirical experiments that there is a significant overlap among the Top-K tokens in the attention scores between consecutive Transformer layers. Thus, they propose performing full attention computation only in the initial and intermediate layers, while the other layers select the most relevant KV-cache to compute attention, achieving an efficient sparse attention mechanism. Experimental results across diverse LLMs and tasks demonstrate significant improvements in both decoding latency and the quality of generated results. The optimal choice of the token re-selection layer is consistent across different tasks if the model belongs to the same model family.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.The article is written in a clear and comprehensible way, with strong consistency across empirical experiments, motivation, methodology, and the effectiveness of the proposed method.\\n2.The method is simple yet effective, supported by extensive experimental and theoretical validations. A wide range of diverse experiments, including multiple LLMs and tasks, demonstrates the excellent performance of the proposed method.\", \"weaknesses\": \"1.The chosen token across different tasks for the same model family is almost consistent; however, the dramatic variation in performance between L12 and L13 in Table 7 presents a significant challenge for the method's generalization across different models and datasets. This variability could lead to catastrophic results when generalizing to other datasets.\\n2.Regarding Figure 2a, the overlap results do not seem significant based solely on the visual representation. Please provide a detailed explanation of the information that can be interpreted from the figure.\", \"questions\": \"1.Building on Weakness 1, an adaptive layer selection strategy seems necessary. 
Did the authors explore any strategies in this regard?\\n2.I am quite curious about the actual application effects of KV Cache Correction. Were corresponding experimental validations conducted?\\n3.In Table 3, it is interesting to note that on certain datasets like HotQA, the sparse attention mechanism sometimes outperforms the full attention mechanism. Is there a relevant explanation for this, or an empirical rationale behind it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a novel sparse attention mechanism, TidalDecode, to enhance the efficiency and accuracy of large language models (LLMs) during the decoding stage of inference. This mechanism leverages a layer-wise selection approach to optimize attention computation. By reducing the amount of KV cache requiring input/output, TidalDecode significantly decreases latency. Empirical evaluations demonstrate that TidalDecode achieves up to a 2.1\\u00d7 reduction in latency while maintaining comparable generative performance to full attention mechanisms.\\n\\nThe paper is clear and easy to follow. Sufficient experiments and analyses make the results very convincing. In the rebuttal stage, the authors actively interact with reviewers to answer their questions. I am happy to recommend acceptance of the paper. 
However, please also consider the negative comments when revising the paper.\", \"additional_comments_on_reviewer_discussion\": \"As demonstrated in the 'General Respond' in official comment, the authors have actively refined the paper to address and alleviate the reviewers' concerns.\"}", "{\"summary\": \"This paper introduces TidalAttention, which uses a layer-wise selection approach to choose the top-k keys and values needed in top-k attention, thereby reducing the amount of KV cache requiring I/O.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The advantages of this paper are as follows:\\n\\n1) The logic is very clear, and the writing is well-structured. The figures and captions are precise, making it easy for readers to quickly understand.\\n\\n\\n2) The focus of the paper is excellent, and I completely agree with it. Particularly with group query attention, KV cache storage is no longer a major issue; instead, the I/O for the KV cache in attention deserves attention. And I really like the mechanism cache correction.\\n\\n\\n3) Most of the experiments are solid, although there is room for improvement. And the cuda kernel is good.\", \"weaknesses\": \"I think the paper has the following limitations or raises some questions for me:\\n\\n1) I believe there may be an issue with the dataset selection in the experiments. Since the authors\\u2019 method retains the full KV cache as opposed to KV cache eviction, it would be best to evaluate on datasets with longer outputs. If the dataset only outputs one token, as in NIAH, there should be no difference between eviction-based and selection-based sparse methods, which doesn't highlight the importance of maintaining the full KV cache. Additionally, some QA datasets in LongBench also have this issue. 
I recommend that the authors supplement more summarization datasets from LongBench (such as gov_report) and code completion datasets (like LCC and repo-bench).\\n\\n\\n2) The authors claim that their kernel stores dot product instead of attn score, so I just wondering how this method works for group query attention. How can the authors decrease the IO of gqa if the queries in the same group with different top-k?\\n\\n\\n3) The authors\\u2019 description of Quest is not very clear, and it would be beneficial to add citations for previous baselines in the tables.\", \"questions\": \"see cons.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for the response and would like to try our best to address your concerns.\\n\\n1. We agree with the reviewer that having ablation results with KV cache correction can make our evaluation more comprehensive. Therefore, we have included ablation studies on KV cache correction in Figure 9 in the appendix. We can see slight improvement over the version without KV cache correction. More importantly, from all other evaluations, we can observe that our core method (position persistent sparse attention) can consistently achieve the best performance while bringing a significant speed-up ratio due to its simplicity, even without KV cache correction.\\n\\n2. We agree with the reviewer that an automatic selection method is required if the optimal choice of token re-selection layer is sensitive to different downstream tasks. However, our evaluation results demonstrate that the optimal token re-selection layer is really consistent across all downstream tasks and even all the models within the same model family. For instance, the optimal token re-selection layer is the 13th layer for LLaMA-3/3.1/3.2-8B in all downstream tasks we evaluated. 
Therefore, we only need to decide the optimal token re-selection layer once, and the selected layer can be used for all later deployments over different tasks. Additionally, as the model performance is really sensitive to different choices of the optimal token re-selection layer (as shown in Table 9-14), we can identify the optimal token re-selection layer with minimal effort through a needle-in-the-haystack task over a small set (100) of synthetic examples. Additionally, in Figures 2(a) and 2(b), we present our observation, which can serve as a method to effectively narrow down the search space for selecting reselection layers. This is achieved by calculating the average overlap ratio when different layers are chosen as the reselection layer in a simple Needle-in-the-Haystack setting. This approach further reduces the effort required to identify the optimal reselection layer. \\n\\nWe hope our responses above could address most of your concerns, and please feel free to leave more questions if you have. We greatly appreciate your feedback, which deeply contributes to a better revision of our previous draft.\"}" ] }
Ek50sQQI1w
A Novel Listwise Alignment Approach for Language Models with Explicit Rewards
[ "Ellen Yi-Ge", "Mige Zhu", "Jason Gao", "Liz Li", "Jiajun Tian" ]
Existing alignment techniques, including Direct Preference Optimization (DPO), are primarily designed for pairwise preference data where rewards are inferred rather than explicitly provided. In this paper, we propose a comprehensive framework for aligning large language models (LLMs) by introducing a new optimization objective that facilitates the processing of reward datasets, which consist of a list of responses explicitly marked with scalar preference scores. Our contribution includes the development of a novel algorithm, termed Soft Preference Optimization (LPO), which allows for the direct derivation of an LLM policy from both reward and preference datasets. At the heart of LPO is a unique listwise preference optimization objective formulated using an exponential-logarithmic function and an adaptive loss coefficient, which effectively integrates listwise preference signals into the LLM. We assess the efficacy of our approach under both reward and preference scenarios using different sizes of Mistral models. Experimental results indicate that our method outperforms several preference-based benchmarks, particularly when reward datasets are utilized. Additionally, our method demonstrates a significant advantage over DPO in intricate reasoning tasks, such as mathematical problem-solving and coding.
[ "large language models", "preference alignment", "listwise optimization objective" ]
https://openreview.net/pdf?id=Ek50sQQI1w
https://openreview.net/forum?id=Ek50sQQI1w
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kfdTsArCtj", "dzzSxqhJUK", "cw75frHmAp", "b4jkYEJ0B7", "Nojf6pukqz", "HMbWTmQgmi" ], "note_type": [ "official_review", "comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730719608220, 1731488595363, 1731486328913, 1730543455864, 1730373995153, 1730409853827 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10079/Reviewer_92iv" ], [ "ICLR.cc/2025/Conference/Submission10079/Authors" ], [ "~Daniil_Gavrilov1" ], [ "ICLR.cc/2025/Conference/Submission10079/Reviewer_Sy9u" ], [ "ICLR.cc/2025/Conference/Submission10079/Reviewer_G9Sb" ], [ "ICLR.cc/2025/Conference/Submission10079/Reviewer_PGxV" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Listwise Preference Optimization (LPO) which processes listwise preference information, leveraging a novel optimization objective tailored for multiple responses per prompt. This listwise approach supports both reward and preference datasets, incorporating DPO as a special case. The paper also introduces LPO-abs, a variant designed to counteract the issue that response likelihoods decrease over training. Experimental results across challenging tasks, including mathematical reasoning and coding, demonstrate that LPO and LPO-abs outperform baseline models like DPO and KTO.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"It proposes an innovative method modified from DPO for listwise alignment, which is novel.\\n\\nThe evaluation datasets are comprehensive and diverse.\\n\\nThe proposed LPO-abs effectively prevent the likelihood of preferred responses from decreasing, which solve an important issue of DPO.\", \"weaknesses\": \"While LPO and LPO-abs are evaluated across several benchmarks, they are only experimented on the mistrial language model.\\n\\nThe baseline methods are not comprehensive enough. 
\\n\\nAlso, the improvement seems to be slight right now.\", \"questions\": \"Please refer to the weaknesses.\\n1. To validate the robustness of the alignment method, other models like Llama may also be tested.\\n2. The baseline methods may include others like IPO, SimPO, etc to fully validate the performance. \\n3. It would be better to compare the method with some other reward-based alignment method that can work on the reward datasets\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Double submission\", \"comment\": \"This paper is a duplicate of the submission at https://openreview.net/forum?id=28TLorTMnP (see Table 2, 3, Figure 1, 2). The version with the lower scores was withdrawn.\"}", "{\"summary\": \"This paper presents LPO (Listwise Preference Optimization), a method for training language models using reward datasets. The authors propose LPO and its variant LPO-abs, which they claim can optimize language models using reward data and address the data likelihood decline problem observed in DPO. The authors describe their methods as being capable of working with both reward data and preference data in a unified framework. Through experimental evaluation, the authors report that LPO outperforms existing preference-based methods when tested on both reward and preference datasets, which they attribute to better utilization of information from reward datasets. The work aims to contribute to language model training by offering a framework that integrates both reward and preference-based optimization approaches.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper has a clear structure in presenting the methodology. 
The experimental design follows a good practice, with research questions outlined before results. This helps readers follow the technical contributions despite the complexity of the subject matter.\\n\\n2. This paper raises an interesting direction to utilize the explicit reward data when preference optimization. The authors consider both reward-based and preference-based scenarios in their experiments even though their method is mainly about reward-based scerario. This is necessary since in general, the explicit reward dataset is not well defined and often hard to get, which makes the preference datasets still the dominant real applicational case. \\n\\n3. The evaluation covers a good range of benchmarks across different domains (BBH, HumanEval, LeetCode, various math tasks). This testing across diverse tasks helps understand how those methods compares in different domains.\", \"weaknesses\": \"1. The reward-based experiments in Section 5.1 raise methodological concerns regarding evaluation bias. Specifically, the reported improvements on MT-bench warrant careful interpretation given that the model's training utilizes GPT-4 rewards (0-10 scale) and is subsequently evaluated using the same GPT-4 scoring system. This circular evaluation framework makes it challenging to disentangle genuine methodological improvements from potential artifacts introduced by the training data. A more robust evaluation would benefit from independent metrics and human evaluation to validate the reported gains.\\n\\n2. The AlpacaEval results are actually comparable to baselines, with some labeling inconsistencies in reporting the second-best results; Moreover, the absence of win rate (WR) reporting against DPO for some baseline,(e.g. KTO, the best baseline in AlpacaEval metric), limits our ability to fully assess the method's comparative advantages.\\n\\n3. Although the explicit reward data setting is worth exploration, the technical novelty of the method is limited from my perspective. 
The core idea essentially multiplies the DPO objective by the absolute difference of reward values between winning and losing samples, then adds a term to keep the learned reward generally high. This modification is relatively straightforward and the empirical results don't demonstrate significant improvements to justify these design choices.\\n\\n4. The loss function design appears largely heuristic. While drawing inspiration from LambdaRank, there's insufficient theoretical justification for why this particular formulation is optimal. Given the lack of strong theoretical grounding, the empirical results would need to be more convincing to justify the design choices.\", \"questions\": \"1. Using explicit reward data for LLM alignment is an interesting direction that could potentially offer more direct supervision than pairwise preferences. However, this paradigm raises several questions that deserve deeper discussion: How should we define \\\"good\\\" explicit rewards for language model alignment? While this paper uses GPT-4 scores, we should consider more broadly: what are principled ways to collect such reward data? Different sources (human annotations, model scores, automated metrics) might introduce different biases - for example, using GPT-4 scores might bias the model toward GPT-4's behavior rather than true human preferences. A deeper discussion of these considerations would help establish best practices for using explicit rewards in alignment.\\n\\n2. The paper introduces a method to avoid the reward decreasing of all responses $r\\\\theta(x, y_i)$ in LPO-abs, but the theoretical foundation could be further developed. Since standard reinforcement learning theory suggests that policies remain unchanged when rewards are shifted by a constant, it would be valuable to better understand why maintaining absolute reward values is beneficial in this context. 
If so, then why is it necessary a bad thing that chosen response\\u2019s reward decreases as long as the difference between wining and losing samples is learne? Additional theoretical analysis exploring this aspect would make the motivation clearer and could reveal interesting insights about language model training dynamics.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper describes a way to to policy optimization when mutliple generations for a single prompt are available together with their true reward scores. They authors then proposed to use a weighted DPO objective based on the difference of the given reward scores. They in addition also propose a modification to enhance the likelihood decay. They show on experiments that their proposed method does somewhat better that existing and naive DPO.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper explores a different preference/reward model dataset where rewards are given and used to weigh the objective\", \"The paper is well-written and easy to follow\", \"The paper is straightforward and easy to implement if data is available.\"], \"weaknesses\": [\"The paper's contribution is minimal as they merely add a weight based on the reward difference. The motivation is unclear and seems to be the most obvious way to include rewards when they are present. The idea of weighting the DPO loss is not novel and simply using the difference of rewards, in my opinion, does not warrant a paper unless the results are exceptional. [1] Please let me know if i have missed something.\", \"The objective LPO-abs doesn't seem to make sense to me. How does increasing the reward on every sample help the model at all? Should this be the difference between the true reward and the predicted reward? i.e., eq (9). 
Please explain the regularization term to me again, as I might have misunderstood.\", \"The paper also crucially fails to compare against RPO i.e. SFT+DPO [2] Which also solves the likelihood problem raised by the authors. However, no comparisons were made and I suspect that it will do similarly if not almost exactly the same as LPO-abs in the pairwise case. Please let me know why this wasn't added, happy to be proven wrong.\", \"I am also extremely surprised that there is no hyperparameter for the regularization term in LPO-abs. Is this a feature of this method? if yes please explain the intuition behind this property.\", \"[1] WPO: Enhancing RLHF with Weighted Preference Optimization\", \"[2] Provably mitigating overoptimization in rlhf: Your sft loss is implicitly an adversarial regularizer\"], \"questions\": \"I have raised my concerns above please address them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes LPO for pointwise reward datasets, which generalizes DPO by assigning weights to different pairs of responses. It also proposes to fix the data likelihood decline issue with an extra cross entropy term. The experiments show improved performance across tasks and datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"It studies pointwise reward data setting for offline alignment methods. This is an important problem and seems under-explored.\", \"it studies the data likelihood decline issue, which is known to cause problems.\", \"It conducts extensive experiments.\"], \"weaknesses\": [\"There seems to be a confusing typo --- in the abstract the method is called Soft Preference Optimization?\", \"I am not completely convinced by the design of the LPO loss. 
While it is a sensible generalization of the DPO loss (weighting pairs with bigger reward differences), it's not clear if this is the most principled choice. The derivation of DPO itself, relies on Bradley Terry assumption on the preference data. I feel a better derivation of the loss might come from statistical assumptions on how to model the pointwise data: e.g. does bigger pointwise reward difference mean bigger preference difference, or bigger certainty in the preference? The current formulation seems to use pointwise reward difference as a proxy for uncertainty (more certain pairs gets more weights). But is it a good choice? There are certainly many alternatives that the authors can study/compare against.\", \"About data likelihood decline issue: other papers (e.g. [1]) use the cross entropy of the preferred samples as the extra term, and seem to get rid of the issue. The authors can compare against that.\", \"[1] The Llama 3 Herd of Models https://arxiv.org/abs/2407.21783\"], \"questions\": \"I wrote my questions in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EjJGND0m1x
MIND over Body: Adaptive Thinking using Dynamic Computation
[ "Mrinal Mathur", "Barak A. Pearlmutter", "Sergey M. Plis" ]
While the human brain efficiently handles various computations with a limited number of neurons, traditional deep learning networks require a significant increase in parameters to improve performance. Yet, these parameters are used inefficiently as the networks employ the same amount of computation for inputs of the same size, regardless of the input's complexity. We address this inefficiency by introducing self-introspection capabilities to the network, enabling it to adjust the number of used parameters based on the internal representation of the task and adapt the computation time based on the task complexity. This enables the network to adaptively reuse parameters across tasks, dynamically adjusting the computational effort to match the complexity of the input. We demonstrate the effectiveness of this method on language modeling and computer vision tasks. Notably, our model achieves 96.62\% accuracy on ImageNet with just a three-layer network, surpassing much larger ResNet-50 and EfficientNet. When applied to a transformer architecture, the approach achieves 95.8\%/88.7\% F1 scores on the SQuAD v1.1/v2.0 datasets at negligible parameter cost. These results showcase the potential for dynamic and reflective computation, contributing to the creation of intelligent systems that efficiently manage resources based on input data complexity.
[ "Interpretability", "Fixed points", "Dynamic routing", "Dynamic input processing", "Deep Learning Framework" ]
Accept (Oral)
https://openreview.net/pdf?id=EjJGND0m1x
https://openreview.net/forum?id=EjJGND0m1x
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yipHoO85dy", "whSxLslNP5", "wT7G7pzg0l", "tZA4bBTZHI", "sp6qwGrP8Z", "oWSSGcEAsJ", "o7RU0EzRpU", "kLBNLFmyyu", "iOLk1ZaLKV", "ey6UhaN5qP", "baIVc4KwPo", "ZCpwixukVJ", "U4hJASwvIo", "SUOvhiaCsw", "PlBlE32UjJ", "LMeG10iAcP", "JMm9vtuB23", "CMjlFZNq62", "Bmdr7DweRS", "9DIFrqxIrX", "7g1vJOa5M6", "6E1Z2qt6Mv", "13ZDFTlEaU" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1731991697173, 1732495607172, 1731957959570, 1731964963210, 1734789177978, 1731994510561, 1731964860930, 1730474121921, 1730787765037, 1732562426611, 1732764327488, 1732332087284, 1732332919433, 1731992964914, 1732401895826, 1737523961272, 1731992442961, 1732495582926, 1732748157762, 1730705185017, 1732401368751, 1730527608885, 1732666001019 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Area_Chair_AFug" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_P7eZ" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_5AiG" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_fHgv" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9112/Reviewer_P7eZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Authors" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_5AiG" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_fHgv" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_fHgv" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_57h2" ], [ "ICLR.cc/2025/Conference/Submission9112/Reviewer_57h2" ] ], "structured_content_str": [ "{\"title\": \"Official Comment to Reviewer P7eZ\", \"comment\": \"Thank you for your thoughtful comments. We appreciate the opportunity to clarify these important points.\\n\\n## **Response to Comment #1**\\nGreat question! Figure 1 shows a computation graph for the MIND model, where the introspection network makes a decision on how the computation will proceed. Only one of the four possible actions can be chosen:\\n- The prediction is returned immediately (**0. Easy Input in Figure 1**).\\n- The model restarts with iterating Layer 1 to a fixed point, then continues with Layers 2 and 3 as usual (**1. Medium difficulty in Figure 1**).\\n- The model restarts with iterating the Layer 1 -> Layer 2 block to a fixed point, then continues with Layer 3 (**2. Hard in Figure 1**).\\n- The model restarts with iterating the Layer 1 -> Layer 2 -> Layer 3 block to a fixed point and outputs the result (**3. Most difficult input in Figure 1**).\\nSimilarly to an *if..then..else* structure in the computational graph, the introspection (aka controller) network acts as a switch. The introspection network determines which path to take based on the internal model state, expressed by the activations of a fully feedforward run of the prediction model on a given input. 
As our results demonstrate, these activations contain enough predictive information about whether the model will perform well or fail on this input without additional computation.\\n\\nHowever, even after the introspection network completes its decision on how complex the required computation is, the prediction model still further adapts based on actual input complexity via convergence speed within the Fixed Point Iteration (FPI). In other words, although the computational power of each iteration of FPI is determined by the introspection network, the actual computation may be more or less intense (as expressed by the number of iterations taken), giving the overall system a way to correct for controller mistakes.\\n\\nAs our empirical results demonstrate, this test-time adaptivity leads to gains in accuracy and parameter efficiency while reducing computational complexity.\\n\\n## **Response to Comment #2**\\nThis is a helpful and on-point observation. Reviewer fHgv also caught this issue in our exposition. To address both of your comments, we have restructured the corresponding section of the manuscript as follows:\\n\\n> Furthermore, we cap the number of iterations in all FPIs based on the input complexity score computed as:\\n> $$ \\\\operatorname{IC}(x) = \\\\alpha \\\\cdot \\\\left(1 - \\\\max(\\\\operatorname{softmax}(f(x)))\\\\right) + \\\\beta \\\\cdot H(\\\\operatorname{softmax}(f(x))) + \\\\gamma \\\\cdot \\\\|\\\\nabla_{x} f(x)\\\\|_2,$$\\n> \\n> where $f(x)$ is the model's output before the final softmax layer, $H(\\\\cdot)$ is the entropy function, $\\\\|\\\\nabla_{x} f(x)\\\\|_2$ is the $L_2$ norm of the input gradient, and \\n> $\\\\alpha$, $\\\\beta$, and $\\\\gamma$ are weighting coefficients set to 0.4, 0.4, and 0.2 respectively. The maximum number of iterations is set to:\\n> \\n> $$ \\\\operatorname{max}(10 \\\\cdot \\\\operatorname{IC}(x), 50). 
$$\\n>\\n> For simplicity, MIND-Transformer employs configurations similar to those of a standard Transformer by Vaswani et al., 2017 (see Table 1). The model incorporates fixed-point iterations within its self-attention mechanism and transition function block ($\\\\phi$) across multiple layers.\\n>\\n> We utilize relative positional embedding with a sequence length of 120 tokens for both training and inference processes.\\n\\n## **Response to Comment #3**\\nThank you for pointing out the need to clarify CompCost's dependency on layers and iterations in the MIND model. CompCost depends on both the number of layers and fixed-point iteration steps, which are determined during the forward pass based on input complexity. Through its introspection mechanism, MIND dynamically selects which layers to activate\\u2014more complex inputs require both more layers and more iterations per layer. We represent this compound effect as a product of these factors.\", \"we_define_compcost_as\": \"$$CompCost_i = \\\\sum_{l=1}^N k_i \\\\times I_{i,l},$$\", \"where\": \"- $k_i$ is the number of selected layers for input $x_i$ (e.g., $k=3$ for Layers 1, 2, and 3),\\n- $I_{i,l}$ is the number of FPI iterations used for layer $l$ on input $x_i$.\\nCompCost is part of the introspection loss (${\\\\cal L}_{\\\\text{introspect}}$) (see Equation 7). We have revised our manuscript accordingly to ensure this point is communicated clearly.\\n\\n## **Response to Comment #4**\\nWe appreciate your thoughtful comments about terminology. While we intentionally used certain metaphorical terms to draw evocative parallels with neuroscience and cognitive processes, we agree that \\\"self-aware\\\" may be inappropriate due to its connotations. To address this concern, we have replaced \\\"self-aware computation\\\" with *reflective computation*, a more precise technical term that better characterizes this mechanism's computational nature.\\nWe hope these clarifications address your concerns. 
We would be happy to provide additional details or make further revisions to improve clarity in our presentation.\"}", "{\"title\": \"Official Comment to Reviewer fHgv\", \"comment\": \"# Q3:\\n We thank the reviewer for this insightful suggestion, we would like to address points 3 and 4 together. We have conducted comprehensive ablation studies on $\\\\mathcal{L}_{introspect}$ by systematically analyzing three variants in Appendix F3 : \\n\\n1. **MIND-Reduced**: A version of the MIND model where the introspection network considers only a reduced set of activations of the prediction model's initial run. In this case, only the activations of the first and the second layer are considered.\\n2. **MIND-Fixed**: A version where the introspection network is not active during inference and instead decisions about which layers to FPI are based on the input complexity measured as $H(\\\\operatorname{softmax}(x))$. If $H<0.4$ then $\\\\operatorname{FPI}(\\\\mbox{layer}_1)\\\\rightarrow\\\\mbox{layer}_2\\\\rightarrow\\\\mbox{layer}_3$; if $0.4\\\\le H<0.8$ then $\\\\operatorname{FPI}(\\\\mbox{layer}_1\\\\rightarrow\\\\mbox{layer}_2)\\\\rightarrow\\\\mbox{layer}_3$; if $H\\\\ge 0.8$ then $\\\\operatorname{FPI}(\\\\mbox{layer}_1\\\\rightarrow\\\\mbox{layer}_2\\\\rightarrow\\\\mbox{layer}_3)$. This procedure removes a significant part of reflective computation at inference but keeps the FPI structure.\\n3. **MIND-Uniform**: A version where all layers are always used in the FPI iteration. Specifically, the FPI loop iterates the $\\\\mbox{layer}_1\\\\rightarrow\\\\mbox{layer}_2\\\\rightarrow\\\\mbox{layer}_3$ block until convergence. This approach removes adaptive selection keeping the weight-tying benefits. \\n\\nOur results demonstrate that while MIND-Reduced achieves a 15% reduction in FLOPs with only minimal accuracy loss (2-3%), both MIND-Fixed and MIND-Uniform require substantially more computational resources without performance benefits. 
The appendix section F.3 on page 20 of the manuscript explains what MIND-Reduced, MIND-Fixed, and MIND-Uniform stand for in detail. We have used the text above to update this section with a more detailed description of what has been done. \\n\\nWe appreciate your astute observations, which helped us clarify the paper. We hope we now have answered all your questions.\"}", "{\"title\": \"Official Comment to Reviewer 5AiG\", \"comment\": [\"We thank the reviewer 5AiG for their prompt question. We appreciate the reviewer's valuable suggestion to compare our approach with additional recent early exit methods like CALM (2022) and LayerSkip (2024). We have conducted extensive experiments to provide the requested comparison of these approaches. However, before jumping into the results, let us first outline the differences between our approach and the two requested models.\", \"## Fundamental Differences in Approach and Application\", \"While CALM and LayerSkip demonstrate effective early exit strategies for large language models, MIND model operates in a fundamentally different context with lightweight architectures for vision and simpler language tasks.\", \"MIND model achieves state-of-the-art results (88.3% Top-1 accuracy on ImageNet, 95.8% F1 on SQuAD) using just 5.31M parameters through its novel introspection mechanism and fixed-point iterations.\", \"Future work could explore adapting MIND model's introspection mechanism to larger language models, potentially combining the benefits of confidence-based (CALM) and layer-dropout (LayerSkip) approaches with MIND's adaptive computation framework.\", \"## Key Architectural Differences\", \"### MIND-Transformer:\", \"Learned introspection mechanism for making decisions based on input complexity\", \"Minimal memory overhead\", \"No confidence threshold tuning required\", \"### CALM:\", \"Employs confidence-based exit strategy using three measures\", \"Requires additional classifiers at each exit point\", \"Higher memory 
overhead (15%)\", \"Needs careful threshold tuning\", \"### LayerSkip:\", \"Uses layer dropout with fixed rates\", \"Requires verification stages\", \"Shows larger performance degradation\", \"Moderate memory overhead (10%)\", \"## Experimental Setup\"], \"we_evaluated_all_methods_using_bert_base_as_the_foundation_model_under_identical_conditions\": \"- Hardware: NVIDIA A100 GPUs\\n- Datasets: WikiText-103 (language modeling), CNN/DailyMail (summarization), SQuAD v2.0 (QA)\\n- Identical batch sizes and sequence lengths for fair comparison\\n\\n# Performance Metrics\\n| Model | ROUGE-1 (%) | ROUGE-2 (%) | Avg. Inference Time (ms) |\\n|------------------|-------------|-------------|--------------------------|\\n| MIND-Transformer | 42.3 | 19.8 | 180 |\\n| CALM | 41.9 | 19.5 | 165 |\\n| LayerSkip | 41.5 | 19.2 | 210 |\\n\\n## Implementation Details\\nWe have added the additional experimental comparisons that you requested to Section 4 of the paper and included detailed results in Table 6. The results demonstrate that the general introspection+FPI approach presented in our paper can be used with the Transformer architecture (MIND-Transformer) to offer an excellent balance between efficiency, implementation complexity, and performance maintenance. While a more specialized approach like CALM may achieve a 9% higher average speedup while having a performance drop of 0.3%, the generality of our approach may offer further opportunities for improvement. This makes MIND-Transformer in particular specifically suitable for practical applications where memory constraints and implementation simplicity are important considerations alongside computational efficiency.\"}", "{\"title\": \"Official Comment to Reviewer fHgv\", \"comment\": \"# Questions\\n\\n## Clarifications on Mathematical Formulations\\n\\n1. **m' in line 147**: m' represents the candidate layer selection mask in the argmax operation when computing the final binary mask from the probability distribution. 
We use **m'** rather than **m** to emphasize that it is akin to a variable of integration.\\n\\n2. **MIND-Transformer Equation Rationale**: The equation introduces adaptive computation in both self-attention and feed-forward networks by:\\n\\n- Adding a learnable function $f_\\\\theta$ that dynamically refines the attention mechanism\\n- Iteratively updating attention weights through fixed-point iterations\\n\\nThis approach helps us maintain the standard transformer architecture while enabling dynamic computation. Reliance of the transformer architecture on skip connections and complexity of each layer combining self-attention and feed-forward networks would require more computational resources had we iterated entire sequences of layers as we do in the case of our 3 layer CNN prediction model. Additionally, as our experiments (Table 3 and Appendix F.1: Table 9) shows - this variant of our approach leads to increased performance metric while decreasing parameter count and FLOPS. \\n\\n3. **Weight Terms in Loss Function**:\\n\\n- $w_l$ represents the importance weight for each layer, reflecting its computational cost. $w_{l}$ is computed as\\n$$\\nw_{l} = \\\\frac{c_{l}}{\\\\sum c_{l}},\\n$$\", \"where\": [\"$c_{l}$ is the computational cost (typically measured in FLOPs) of performing fixed-point iterations at layer $l$. 
We keep track of it dynamically based on how many iterations are actually taken.\", \"$\\\\sum c_{l}$ is the sum of computational costs across all layers.\", \"This normalization ensures the weights sum to 1.\", \"$m_{i,l}$ is a binary indicator (0 or 1) for which action to perform per the diagram in Figure 1 or whether layer $l$ is selected to FPI on input $x_i$ in case of MIND-Transformer\", \"The separate $m_{i,l}$ term helps control the total number of layers used, while $w_l$ weights their relative importance\", \"We have modified the updated revised manuscript to better convey the information above accordingly.\", \"## Implementation Details\", \"### Toy Random Dot Motion Task:\", \"Yes, the model is given two images\\u2014shifted and original\\u2014as 2 input channels classifies this 2-channel input into four possible directions (left, right, up, down)\", \"Figure 3 shows 4 examples of the input of varying difficulty marking the shifted image's direction that serves as ground truth. 
Since we generate this data, we know the ground truth.\", \"As we state in the manuscript, MIND-model achieved 0.85 \\u00b1 0.0069 accuracy compared to 0.56 \\u00b1 0.0004 for a CNN with the same number of layers and channels as the prediction network\", \"### Introspection Network Architecture:\", \"Uses a lightweight MLP architecture, with 3 layers 64 neurons each\", \"Total parameters: 0.3M (as shown in Table 2)\", \"Processes aggregated activations from selected layers of prediction network\", \"$p_{i,l}$ values are converted to binary $m_{i,l}$ using Gumbel-Softmax with the straight-through estimator\", \"During training: uses continuous relaxation for gradient flow\", \"During inference: uses argmax for discrete decisions\", \"## Corrections\", \"The typo in Line 127 (\\\"pediction\\\" should be \\\"prediction\\\") has been corrected and has been updated in the latest manuscript.\"]}", "{\"metareview\": \"This paper proposes Model INtrospection for a Dynamically adaptive model (MIND), a method to dynamically allocate network compute based on the difficulty of the inputs. The proposed method consists of two networks - a) prediction network and b) an introspection network that decides which layers to run for more intensive computation (using fixed point iterations). The authors demonstrate the effectiveness of the MIND model for vision tasks using a three layer CNN as the prediction network, outperforming much larger models like ResNet-50, and EfficientNet B7 on ImageNet and CIFAR-100 datasets.\\nThe authors further demonstrate that MIND\\u2019s dynamic allocation of computational depth depending on the input complexity is more effective, both in terms of accuracy and efficiency (fewer parameters and FLOPs) over static compression techniques like pruning and quantization.\", \"strengths\": \"The approach is well motivated and the problem of adapting the computations used by a model is an interesting one. 
The method is presented clearly and well contrasted with prior work. Extensive experiments show clear improvements compared to previous approaches.\", \"weaknesses\": \"Most of the raised weaknesses were addressed during the discussion. There remain some concerns with respect to the complexity of the method.\\n\\nOverall this paper presents a solid contribution to an important problem and should definitely be accepted, possibly even as an oral presentation.\", \"additional_comments_on_reviewer_discussion\": \"The authors actively engaged in the discussion, incorporating all of the feedback, and even running a substantial amount of additional experiments to strengthen the paper. I believe they managed to address all of the major and most of the minor concerns raised by the reviewers.\"}", "{\"title\": \"Rebuttal Summary by the Authors\", \"comment\": \"We are grateful to the reviewers for their insightful analysis that highlighted several key strengths of our work: the clever use of intermediate activations for complexity assessment (R3), the achievement of superior performance with significantly fewer parameters across both vision and language tasks (R1, R2), and the practical advantage of compatibility with existing architectures (R3). We particularly appreciate the recognition of MIND's ability to outperform larger models like ResNet-50 and EfficientNet B7 (R2), while maintaining engineering simplicity for real-world applications (R3).\\n\\nBuilding on these acknowledged strengths, we have thoroughly addressed all concerns raised by the reviewers, significantly improving the manuscript's clarity and technical depth. 
Our responses below detail the extensive additional experiments, clarifications, and improvements made in response to each reviewer's feedback, further strengthening MIND's contribution to adaptive computation research.\\n\\n## Reviewer 5AiG\\n we conducted additional experiments comparing MIND with recent early-exit methods (CALM, LayerSkip), demonstrating our unique advantages in parameter efficiency while clarifying the distinct operational domains.\\n\\n## Reviewer fHgv\\nwe substantially expanded our technical analysis, providing detailed mathematical formulations of our input complexity metric and proving its domain-agnostic nature. We added comprehensive timing analyses across model scales, supported by new experimental data.\\n\\n## Reviewer 57h2\\nwe thoroughly addressed the convergence criteria concerns by adding empirical evidence of computational efficiency, including new ablation studies. We also added a dedicated limitations section, demonstrating scientific rigor and transparency.\\n\\n## Reviewer P7eZ\\nwe refined our model architecture description with precise technical details and metrics, while maintaining the demonstrated strong empirical results that the reviewer praised.\\n\\n\\nThese comprehensive revisions, combined with the strong experimental results and architectural innovations already noted by the reviewers, demonstrate MIND's significant contribution to efficient, adaptive computation across domains. We believe these improvements address all concerns while reinforcing the original strengths identified in the reviews.\"}", "{\"title\": \"Official Comment to Reviewer fHgv\", \"comment\": \"Thank you, Reviewer fHgv, for these insightful questions. 
However, before we address the questions, we will first address the two weaknesses as we find them rather addressable within this rebuttal:\n\n## Weakness 1: Input Complexity Metric Integration\n\nThe input complexity metric is directly integrated into the introspection network's decision-making process through a quantifiable formula:\n\n$$ IC(x)=\\alpha \\cdot(1-\\max (\\operatorname{softmax}(f(x))))+\\beta \\cdot H(\\operatorname{softmax}(f(x)))+\\gamma \\cdot\\left|\\nabla_{x} f(x)\\right|_{2} $$\", \"where\": \"- $\\alpha, \\beta, \\gamma$ are importance weights for each component of the metric (set to 0.4, 0.4, 0.2 respectively)\n- $H(\\cdot)$ is the entropy function\n- $\\left|\\nabla_{x} f(x)\\right|_{2}$ represents the L2 norm of input gradients\n\nThis metric is domain-agnostic, i.e. can be applied regardless of data type and model architecture, and it consists of 3 components that play their specific role:\n\n- The softmax confidence term captures prediction uncertainty\n- The entropy term measures distribution spread\n- The gradient norm quantifies input sensitivity\n\nHowever, we absolutely agree with the reviewer that additional clarity is needed to explain how the metric is used. We have changed the relevant part of the MIND-Transformer section to read as follows:\n\n> Furthermore, we cap the number of iterations in all FPIs based on the input complexity score computed as:\n> \n> $$\n> \\operatorname{IC}(x) = \\alpha \\cdot (1 - \\max(\\operatorname{softmax}(f(x)))) + \\beta \\cdot H(\\operatorname{softmax}(f(x))) + \\gamma \\cdot \\|\\nabla_{x}f(x)\\|_2,\n> $$\n> \n> where $f(x)$ is the model's output before the final softmax layer, $H(.)$ is the entropy function, $|\\nabla_{x}f(x)|_2$ is the $L_2$ norm of the input gradient and \n> $\\alpha$, $\\beta$, and $\\gamma$ are weighting coefficients set to 0.4, 0.4, and 0.2 respectively. 
The maximum number of iterations is set to $\\operatorname{max}(10 \\cdot \\operatorname{IC}(x), 50)$.\n> \n> For simplicity, the MIND-Transformer employs the same configurations as a standard Transformer of Vaswani et al. 2017 (see Table 1).\n> The model incorporates fixed point iterations within its self-attention mechanism and transition function block ($\\phi$) across multiple layers.\n> We utilize relative positional embedding with a sequence length of 120 tokens for both training and inference processes.\n\n## Weakness 2: Inference Time in Large Language Models\n\nWe acknowledge the concern about inference time in LLMs. Guided by considerations pointed out by the reviewer, we have architected MIND-Transformer slightly differently from our base MIND model. There we have employed the following strategies in the original submission:\n\nOur introspection network selectively activates the layers using fixed point iteration, which ensures not all layers require processing during inference, as we show in experiments with BERT-based models as well. \n\nThis allows simpler inputs to converge faster with fewer iterations, while complex inputs may take more iterations, but only in the necessary layers.\n\nOur experiments in Table 4 and Table 13 show this approach maintains performance while controlling computational cost:\n\n### Distribution of FPI iterations across different input complexities\n\n| Complexity | 1-10 | 11-25 | 26-50 | 51-99 | 100 | Avg. 
Iterations |\\n|------------|-------|--------|---------|---------|------|-----------------|\\n| Simple | 68.5% | 24.7% | 5.6% | 1.1% | 0.1% | 8.3 |\\n| Medium | 42.1% | 35.6% | 17.4% | 4.3% | 0.6% | 19.7 |\\n| Complex | 15.7% | 32.3% | 35.9% | 13.8% | 2.3% | 37.2 |\\nThese optimizations ensure that even with larger models, the inference time remains manageable while preserving the benefits of adaptive computation.\\n\\nFor our future iteration to this work, we will focus mainly on LLMs and how we can optimize the networks based on current architecture.\"}", "{\"summary\": \"The authors propose a framework for designing architectures that are able to adaptively control the number of computations they perform in order to produce an output given an input. Their proposal has two main components. First, separate the model into two parts: a prediction network and a control network. At each step, the latter decides which layers to use to refine it\\u2019s estimate of the output. To make computations even more granular, they use a Deep Equilibrium networks to implement each layer as a fixed-point iteration computation. Thus the control network can no only decide which layers to apply, but for how long. The authors proceed to show the effectiveness of their approach on several qualitatively different datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The approach is well motivated and the problem of adapting the computations used by a model is an interesting one.\\n2. The authors lay out the approach in good detail, explaining how it diverges and improves from previous work.\\n3. They conduct extensive experiments to support their goal of improving upon previous approaches.\", \"weaknesses\": \"1. Some details on model architecture and metrics are missing.\\n2. Parts of the language used to describe the model are misleading and fall into unnecessary anthropomorphising.\", \"questions\": \"1. 
It is not clear from Figure 1 or maybe not detailed enough in the text how the controller network determines the computation time of each layer. In other words, how are the parameters determined? Or are they always fixed and it is just a choice between 1 pass or multiple (until convergence)?\n2. The complexity metric is described as \u201cthorough\u201d but it is not clear what makes it such. Or even what complexity means in this case.\n3. CompCost is said to depend on the number of layers and iterations, but is this a sum or some other function? Not clear from the text.\n4. I have an issue with the anthropomorphising the authors lean into.\n 1. There is no mention about any \u201cbody\u201d in the text so the title is misleading. \n 2. The model doesn\u2019t think, it processes. There is no \u201cself-awareness\u201d, at most it self-regulates. \n 3. And why call it \u201cMIND\u201d? It doesn\u2019t even match the first letter of the name they themselves assign.\n 4. The title could be \u201cAdaptive Processing through Dynamic Computation Control\u201d or something, which conveys a better feel about what the authors are doing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"1) This paper introduces a new method to dynamically allocate network compute based on the difficulty of the inputs allowing early exits at inference time. The proposed method comprises 2 networks - a) prediction network that outputs activations at each layer for a given input b) an introspection network that takes in the activations and decides which layers to pick for more intensive computation (using fixed point iterations) and which layers to leave as is.\n\n2) Authors also describe a training procedure to jointly optimize the introspection and prediction networks. 
\\n\\n3) Experiments show that a much smaller model (in terms of param count) achieves better performance than considered baselines on language modeling and vision tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The proposed method is generic and has been applied across modalities with sufficient ablations to prove that the proposed method work.\", \"weaknesses\": \"1) The paper doesn't contrast and compare to more recent early exit methods proposed for language modeling tasks :\\na) Confident Adaptive Language Modeling (https://arxiv.org/abs/2207.07061) \\nb) LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding (https://arxiv.org/abs/2404.16710)\", \"questions\": \"1) Can you provide experiments comparing the proposed method to more recent early exit methods like CALM and LayerSkip ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Followup by Authors\", \"comment\": \"Thank you for your feedback on our initial response. We have completed the requested CALM and LayerSkip comparisons within the tight timeline. The new experiments confirm MIND\\u2019s parameter efficiency advantages while maintaining comparable performance. Thus, we have addressed all of the concerns in your review. Please let us know if you need any clarification on these results.\"}", "{\"title\": \"Official Comment by Reviewer fHgv\", \"comment\": \"I appreciate the authors for their response to my followup questions and for performing additional ablation experiments. I will stick with my original rating and recommend for overall acceptance of the paper.\"}", "{\"title\": \"Official Followup to Reviewer 5AiG\", \"comment\": \"We, the authors, wanted to thank you for your thoughtful review. 
We've addressed your feedback in our revision, particularly regarding additional experiments comparing MIND model with recent early-exit methods (CALM, LayerSkip), demonstrating our unique advantages in parameter efficiency while clarifying the distinct operational domains. With Thanksgiving approaching, we want to be mindful of your time, but please let us know if anything needs further clarification.\"}", "{\"title\": \"Official Followup to Reviewer fHgv\", \"comment\": \"Thank you for your thoughtful review. We\\u2019ve addressed your feedback in our revision where we substantially expanded our technical analysis, providing detailed mathematical formulations of our input complexity metric and proving its domain-agnostic nature. We added comprehensive timing analyses across model scales, supported by new experimental data for MIND model as well. While we respect your time during this holiday week, we are happy to discuss any remaining questions you might have.\"}", "{\"title\": \"Official Comment to Reviewer 57h2\", \"comment\": \"Thank you, Reviewer 57h2, for these insightful questions:\\n\\n**Q: Given the size of the appendix can you put in a full section to indicate the weaknesses (vs hinting at them in the future works/conclusion section)?**\\n\\nWe share this desire and are open about the model weaknesses, for that reason we have described them in a separate limitation and future work section: Section H in the Appendix. The rationale for having limitations and future work together in the same section was to be able to explain how the limitations can be addressed by the future research. However, following your request, we have split the section into a separate limitations/weaknesses section and a future work section.\\n\\n**Q: Is the trade-off in terms of gain for the complexity worthwhile?**\\n\\nBased on the results we have presented in the manuscript, even the current implementation of the idea is already promising. 
However, we expect further gains as the approach that we proposed gains traction and more models are developed with control of parameter reuse via feedback loops and self-introspection. Additionally, note that the models that we employ\\u2014introspection and prediction networks\\u2014are simpler and smaller than the current SOTA models. The additional complexity is in the meta algorithm of connecting these together. Yet, we hope our empirical results speak for themselves.\\n\\n**Q: How can this be known earlier before selecting this approach?**\\n\\nOverall, your question is similar to a question of whether one should choose to train a 3-layer CNN or a 100 layer ResNet on their problem - only MIND model answers this question automatically. We, in fact, think that it is of benefit to go with the MIND model approach for the following reasons:\\n\\n1. If we are training a model that will be deployed in different environments, then a single trained model will adapt to employ complexity in the complex environment while defaulting to the simple straight-through model in the simple one.\\n2. In the simple environment (assessed by the logs after a period of exploitation) the introspection network can be turned completely off to save time and energy.\\n3. Within nonstationary environments that are sometimes complex and sometimes more simple, the traditional approach will have to train and deploy the more complex model.\\n4. If the data/application is indeed simple, then even the training will be faster and simpler compared to training a full blown model of ResNet-110 size.\"}", "{\"comment\": \"I thank the authors for addressing my concerns. I believe the paper is substantially improved and have updated my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Official Comment to Reviewer 57h2\", \"comment\": \"Thank you for your thoughtful review. 
Let us address each of your concerns:\\n\\n**Concern 1: Complex Architecture and Computational Cost**\", \"our_stopping_criteria_and_convergence_thresholds_are_rigorously_defined_through_two_clear_conditions\": \"1. Reaching convergence tolerance $\\\\epsilon$ for relative change:\\n$$ ||z_{k+1} - z_{k}|| / ||z_{k}|| < \\\\epsilon $$\\n2. Maximum iteration threshold (K) to ensure computational boundedness\", \"specifically\": \"- $\\\\epsilon = 10^{-6}$ was selected based on stability analysis\\n- K varies dynamically based on input complexity: $K=\\\\operatorname{max}(10\\\\operatorname{IC}(x),50)$\\n\\n**Concern 2: Gradient Landscape and Gradient Flow Calculation**\\nWe show that medium-complexity inputs typically converge within 11-25 iterations (35.6% of cases), our empirical analysis validates the gradient flow effectiveness. Table 13 demonstrates the correlation between input complexity and iteration count:\\n\\n| Complexity | 1-10 | 11-25 | 26-50 | 51-99 | 100 | Avg. Iterations |\\n|------------|------|-------|-------|-------|-----|----------------|\\n| Simple | 68.5% | 24.7% | 5.6% | 1.1% | 0.1% | 8.3 |\\n| Medium | 42.1% | 35.6% | 17.4% | 4.3% | 0.6% | 19.7 |\\n| Complex | 15.7% | 32.3% | 35.9% | 13.8% | 2.3% | 37.2 |\\n\\nWhile the model may appear complex due to multiple computational paths, the gradient flow remains relatively straightforward since only one path is active for any given input $x_i$. The gradient accumulation occurs in two ways: \\n1. First, the introspection loss $\\\\mathcal{L}_{\\\\mbox{introspect}}$ affects both the introspection network directly and the prediction network indirectly (through its activations fed into the introspection network). \\n2. Second, the prediction loss $\\\\mathcal{L}_{\\\\mbox{pred}}$ further accumulates gradients in the active version of the prediction network selected by the introspection network (whichever FPI layers were selected). 
While differentiation of the introspection multi-layer perceptron is a simple process, the FPI differentiation is a bit more involved.\n\nThe fixed-point iteration (FPI) approach has well-defined convergence properties based on the Banach fixed-point theorem. The gradient follows:\n$$ \\frac{\\partial z^*}{\\partial \\theta} = -(I - \\frac{\\partial f}{\\partial z^*})^{-1} \\frac{\\partial f}{\\partial \\theta} $$\nwhere $z^*$ is the fixed point and $f$ is the layer function.\", \"the_fpi_convergence_is_well_defined_through_equation_8_in_paper_which_shows_stopping_criteria_under_two_conditions\": \"1. Reaching convergence tolerance $\\epsilon$ for the relative change \n2. Hitting the maximum iterations for fixed point iterations.\n\nThe adaptive computation through FPI provides dynamic resource allocation\u2014guided by the introspection network\u2014and efficient parameter reuse through parameter tying. This maintains model performance while reducing computation, as validated through ablation studies showing the effectiveness of dynamic introspection and FPI integration in our paper.\"}", "{\"title\": \"Official Comment to Reviewer fHgv\", \"comment\": \"# Q1:\nWe appreciate the reviewer's interest in the hyperparameter selection. To demonstrate the robustness of our approach, we conducted comprehensive ablation studies:\n| Experiment | Softmax Term ($\\alpha$) | Entropy Term ($\\beta$) | Gradient Term ($\\gamma$) | Top-1 Accuracy (%) | FLOPs (G) | Inference Time (ms) |\n|----------------|---------------------------|--------------------------|----------------------------|--------------------|-----------|---------------------|\n| **Full Model** | 0.4 | 0.4 | 0.2 | **88.0** | 1.05 | 20 |\n| **No Softmax** | 0.0 | 0.67 | 0.33 | 71.5 | 1.25 | 22 |\n| **No Entropy** | 0.67 | 0.0 | 0.33 | 68.2 | 1.30 | 23 |\n| **No Gradient**| 0.67 | 0.33 | 0.0 | 34.2 | 1.45 | 25 |\n### Key Findings\n1. 
Gradient Term ($\\|\\nabla_x f(x)\\|_2$): Removing the gradient term causes a severe drop in accuracy (34.2%), as it is critical for capturing input sensitivity and enabling dynamic adaptation to complex inputs.\n2. Softmax and Entropy Terms: Removing either results in moderate accuracy drops, demonstrating their roles in quantifying uncertainty and confidence in predictions.\n3. Full Model: The full configuration achieves the best accuracy (88%) while maintaining low computational cost (1.05G FLOPs, 20ms inference).\n\nAdditionally, note that there are some relevant insights about the metric's effectiveness as shared in Appendix F.2. The input complexity metric shows strong empirical validation through correlation studies:\n- Softmax values demonstrate a strong correlation (r = 0.82) with human-labeled complexity scores\n- The gradient norm component shows a significant correlation (r = 0.79) with complexity\n- Both correlations are statistically significant (p < 0.001)\n\n#### Experimental Environment:\", \"the_experiments_were_conducted_under_controlled_conditions\": \"1. Dataset: ImageNet for classification tasks.\n2. Hardware: NVIDIA A100 GPUs.\n#### Metrics:\n1. Top-1 accuracy on validation data.\n2. FLOPs and inference time per sample.\n3. Correlation with human-labeled complexity score\n# Q2:\nWe interpret this question as specifically asked about the MIND-Transformer and the details below apply only to this version of our work.\nYou are correct, in this paper we have used argmax to adaptively place the FPI iteration only at one of the Transformer layers or on none. As all of our experiments show this adaptivity only increased the performance and did not much affect the computational complexity. Top-k selection is a great idea, however, being computationally cautious we did not employ it this time, especially given the already pronounced benefits of our presented implementation of the MIND-Transformer. 
The strong performance we achieved with the current implementation\\u2014similar to how our MIND model CNN with only 3 layers outperforms ResNet-110\\u2014demonstrates that our approach already meets our objectives effectively. We appreciate your thoughtful suggestion about top-K selection, as it enriches the discussion of our completed work.\"}", "{\"comment\": \"Thanks for the additional discussion and detailed experiments within this tight timeline. I'm convinced this new method is a valuable contribution even with existing methods like CaLM and LayerSkip. I'm updated my score accordingly.\"}", "{\"summary\": \"The paper proposes a Model INtrospection for a Dynamically adaptive model (MIND) which dynamically adjusts computation depending on the complexity of the input. It consists of two networks: the introspection network and the prediction network. The introspection network takes as input the activations from the different layers of the prediction network, and outputs a binary mask over the layers, determining the layers which require more computation through fixed point iterations. The authors demonstrate the effectiveness of the MIND model for vision tasks using a three layer CNN as the prediction network, outperforming much larger models like ResNet-50, and EfficientNet B7 on ImageNet and CIFAR-100 datasets. The authors also propose MIND-Transformer, with fixed point iterations in self attention and feedforward networks, demonstrating its superior performance on language modeling tasks, despite using fewer parameters than RoBERTa-base. The authors further demonstrate that MIND\\u2019s dynamic allocation of computational depth depending on the input complexity is more effective, both in terms of accuracy and efficiency (fewer parameters and FLOPs) over static compression techniques like pruning and quantization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The authors propose a model, MIND which dynamically adjusts computation via fixed point iterations in its prediction network using an introspection network depending on the input complexity.\\n2. MIND model with a three layer CNN as the prediction network, outperforms much larger models like ResNet-50, and EfficientNet B7 on ImageNet and CIFAR-100 classification datasets.\\n3. The authors also demonstrate that MIND using LSTMs and Transformers in the prediction network achieves superior performance on language modelling tasks using fewer parameters. \\n4. MIND\\u2019s dynamic allocation of computational depth results in higher accuracy using fewer parameters and FLOPs over static compression techniques like pruning and quantization.\", \"weaknesses\": \"1. It is not clear how the input complexity metric is incorporated into the introspection network's mechanism, and how this can be more generally quantifiable\\n\\n2. The MIND model when used with prediction networks with many layers (as in the case of LLMs) will significantly increase the inference time as more layers with fixed point iterations are used.\", \"questions\": \"1. What is m\\u2019 in line 147?\\n2. What is the rationale behind Equation 5 for MIND-Transformer?\\n3. How is $w_l$ in Equation 7 computed? Also, what is the need for a separate $m_{i,l}$ term in Eq 7?\\n\\n4. Just to clarify in the toy random dot motion task the model needs to classify in one of the four possible directions and the direction of the shifted image denotes the ground truth?\\n\\n5. Which dataset are the results for in Table 5?\\n\\n6. What are the parameters of the introspection network, like how many layers are there in the MLP and what are the sizes of the hidden dimensions? \\n\\n7. From $p_{i,l}$ in Equation 8, how are the binary layer selection variables $m_{i,l}$ obtained?\\n\\n8. 
Can the authors share more details about the different MIND variants, like how are fewer FPI layers decided, what is a simpler inspection network and how are the decisions of the introspection network fixed after training?\n\nMinor - Line 127 typo, should be \u201cprediction\u201d", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "3", "code_of_conduct": "Yes"}", "{\"title\": \"Official Comment by Reviewer fHgv\", \"comment\": \"Thank you for the detailed rebuttal and for responding to my questions. I have some followup questions.\n1. How are the hyperparameters $\\alpha$, $\\beta$, $\\gamma$ selected in the input complexity metric? It would be good to see some ablation experiments justifying the importance of each of the terms in that metric.\n2. My concern regarding larger inference times stems from the fact that during inference all layers are used normally, along with FPI layers selected by the introspection network, compared to without using MIND when all layers are used only normally. Also since $argmax$ is computed over the output probabilities obtained from the introspection network, does that mean only a single layer is selected for FPI or there is some top $k$ selection?\n3. Can the authors perform some ablation experiments which would help us better understand the importance of different terms in $\\mathcal{L}_{introspect}$?\n4. Regarding MIND variants in Appendix F.3, can the authors describe the implementation differences between MIND (full), MIND-Reduced, and MIND-fixed?\"}", "{\"summary\": \"The paper introduces an approach targeted at computational efficiency in deep learning by adapting the amount of computation to the complexity of each input. Inspired by how the human brain is considered to allocate resources dynamically. Two core components, (i) introspection network and (ii) prediction network. 
Introspection network analyses intermediate activations from prediction network & figures out which layers require additional compute via fixed-point iterations (FPI) as well as what can proceed with standard forward pass. Prediction net performs FPI until convergence or threshold of iterations reached. Leverages phantom gradients method for backprop through FPI; gradients approximated without unrolling / Jacobian calc to limit compute-memory needs during training.\n\nThe paper is well written and easy to read + get the core idea across.", "soundness": "4", "presentation": "3", "contribution": "4", "strengths": "Computational efficiency; optimal use of resources allocating more for complex inputs. Clever use of intermediate activations to assess input complexity. Should be able to work with existing architectures making engineering it for downstream real-world use cases simpler. Backprop with phantom gradients done in an interesting way. Mixed with use of statistical methods ~ should allow for generalisation. Considered overfitting issues within architectural design.", "weaknesses": "The idea is subtle and complex + introspection network needs more compute cost. Approach may not capture actual gradient landscape given the number of adjustments made & have to be considered. Strategy for arriving at thresholds / stopping criteria around convergence is not clear as such. 
Gradient flow calc is non-trivial given the overall architecture.", "questions": "Given the size of the appendix can you put in a full section to indicate the weaknesses (vs hinting at them in the future works/conclusion section)?\nIs the trade-off in terms of gain for the complexity worthwhile? How can this be known earlier before selecting this approach?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}", "{\"comment\": \"The feedback from the authors across all reviewers' comments is solid. I have leaned towards an accept early on & retain that still.\"}" ] }
EjJD16oaly
GTR: Semi-supervised Learning with Grouping and Transporting for Robust Thresholding
[ "Weiyang Jin", "Siyuan Li", "Zicheng Liu", "Zedong Wang", "Juanxi Tian", "Di Wu", "Cheng Tan", "Chang Yu", "Stan Z. Li" ]
Semi-supervised learning (SSL) digs unlabeled data by pseudo-labeling when labeled data is limited. Despite various auxiliary strategies enhancing SSL training, the main challenge is how to determine reliable pseudo labels through a robust thresholding algorithm based on quality indicators (e.g., confidence scores). However, the existing strategies for distinguishing low or high-quality labels through simple grouping indicators remain in trivial design, ignoring the characteristics of the data distribution itself, which cannot guarantee robustness and efficiency. To this end, we group the quality indicators of pseudo labels into three clusters (easy, semi-hard, and hard) and statistically reveal the real bottleneck of threshold selection, i.e., the sensitivity of semi-hard samples, through empirical analysis. We propose an adaptive Grouping and Transporting method that Robustly selects semi-hard samples with test-time augmentations and consistency constraints while saving the selection budgets of easy and hard samples, dubbed as GTR. Our proposed GTR can effectively determine high-quality data when applied to existing SSL methods while reducing redundant costs in the selection. Extensive experiments on 11 SSL benchmarks across three modalities verify that GTR can achieve significant performance gains and speedups over Pseudo Label, FixMatch, and FlexMatch.
[ "Semi-supervised Learning", "Grouping", "Thresholding", "Plug-and-play", "Pseudo-labeling" ]
https://openreview.net/pdf?id=EjJD16oaly
https://openreview.net/forum?id=EjJD16oaly
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xoiuoeaEL1", "unsvX65r3O", "nZRel4ZYsd", "SpH4L4zpIe", "Mx9l0ATElO", "DjzARuX4Ye", "BUDqUg8ADL" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1729584491204, 1730662542418, 1737020481290, 1730645506112, 1729947421009, 1730728398491, 1730620535129 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_f1vG" ], [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_8Lhz" ], [ "ICLR.cc/2025/Conference/Submission4731/Authors" ], [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_W2b3" ], [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_5KKt" ], [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_HvJ7" ], [ "ICLR.cc/2025/Conference/Submission4731/Reviewer_ooEC" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a Grouping and Transporting for Robust thresholding (GTR) method for semi-supervised learning (SSL). The method addresses the challenge of determining reliable pseudo labels by clustering quality indicators into three groups and using test-time augmentations and consistency constraints to select and transport semi-hard samples. The authors claim that GTR can effectively determine high-quality data when applied to existing SSL methods while reducing redundant selection costs. The paper conducts extensive experiments on eleven SSL benchmarks across three modalities to verify the effectiveness of GTR.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed GTR method is novel and addresses an important problem in SSL. The idea of clustering quality indicators and using transporting to handle different groups of samples is intuitive and has the potential to improve the performance of SSL methods.\\n\\n2. The experimental results are comprehensive and show significant performance gains and speedups over several baseline methods. 
The experiments cover a wide range of datasets and modalities, which adds to the credibility of the results.\n\n3. The paper provides a detailed analysis of the SSL training process and the properties of different groups of samples. This analysis helps to understand the behavior of the proposed method and its advantages over existing methods.", "weaknesses": "1. As can be seen from Table 1, the greatest advantage of proposing this method compared to previous methods lies in whether the threshold can ensure robustness. However, this is not reflected in the description of the method and analysis, and it is not stated what kind of robustness it is, why it can maintain robustness, and why the previous methods cannot guarantee robustness.\n\n2. The formulas representing the transporting process are confusing. Could it be possible to use some set operations (intersection, union, complement, etc.) and element selection conditions to represent this process, in order to increase the readability of the article?\n\n3. Some reasonable and logical explanations are needed for the transporting process, e.g., why the distribution of semi-hard samples should be made consistent with that of easy samples, what kind of distribution is it, how to make them consistent, and what are the effects after they are made consistent.", "questions": "1. Please explain the robustness of the threshold in the proposed method in detail.\n\n2. Please explain the transporting process clearly at the beginning, because the readers may be confused when encountering 'transporting' from Abstract and Introduction.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "4", "code_of_conduct": "Yes"}", "{\"summary\": \"In this paper, the authors introduce a new thresholding technique to create better pseudo labels for semi-supervised learning. 
The authors argue that thresholds used for pseudo labeling change based on difficulty of the unlabelled samples\u2019 indicator (e.g., confidence scores) distribution. Therefore, the authors group them into 3 clusters as easy, semi-hard and hard ones by using Gaussian mixture modelling. Then, they use a transporting mechanism to align the distributions of semi-hard and easy indicator groups. The authors test their proposed technique adapting it to existing semi-supervised learning methods such as FlexMatch and Pseudo Label methods and report improvements over the baselines.", "soundness": "3", "presentation": "3", "contribution": "2", "strengths": "The main strengths of the paper can be summarized as follows:\ni) Grouping the unlabelled samples\u2019 indicator values into three clusters as easy, semi-hard and hard ones and utilizing semi-hard ones for improving pseudo labelling seems reasonable.\nii) The authors report accuracy and training time improvements over the baseline methods.", "weaknesses": "The main weaknesses of the paper can be summarized as follows.\ni) Although I understand the general idea, I had difficulty understanding the details regarding implementation. There are a lot of missing details, notation problems and unexplained terms throughout the paper, and the paper is written badly from this perspective. The following details must be explained more:\n-- The authors must first explain the general network used for learning. They mention the student network in a few places. Did the authors use a teacher-student network training pipeline in the paper?\n-- There are terms that are not explained in the equations. For example, the first term after summation operation in Equations 2 and 4.\n-- I did not understand how the authors implement transporting, what is done in that step? 
How the semi-hard and easy distributions are aligned and how this alignment is used for training?\n-- What is the purpose of filtering defined in Section 3.2?\n-- What is the dimension d of the z vector used in Equation 5?\nii) The figures are misleading. In Figure 1, confidence scores are used in the left part whereas reward score distributions are used on the right. As far as we understand from the text, they are different, therefore comparing confidences to reward scores does not make sense. Confidence scores or reward scores must be used for both cases.\niii) Figures 2 and 3 conflict with each other. In figure 2, it seems both easy and semi-hard indicator values are sensitive to thresholding, yet in figure 3, only semi-hard indicator values are sensitive to thresholding.\niv) Experimental evaluation is not fair. One cannot use a pre-defined network trained on ImageNet for semi-supervised training on ImageNet, STL and Cifar 100 datasets. The network has already seen all the examples of the ImageNet dataset during training. Fine-tuning a semi-supervised learning algorithm from this pre-trained network is a clear violation of a fair testing procedure. 
Also why different networks are used for different datasets (e.g., why the authors do not use FlexMatch for the test reported in Table 3 although it is used for the ImageNet dataset or why FixMatch is not used in Table 2?)\nv) The authors must also report the state-of-the-art accuracies obtained on the tested datasets.", "minor_issue": "there are some minor typos such as in the title of Section 3.2 that must be corrected.", "questions": "I have some questions indicated in Weaknesses part.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "2", "code_of_conduct": "Yes"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper studies the semi-supervised learning problem and divides the pseudo labels into three groups (easy, semi-hard, and hard). The authors find that the bottleneck of threshold selection lies in the sensitivity of semi-hard samples. To solve this issue, the authors propose an adaptive grouping and transporting method to align the semi-hard samples with easy samples. 
Experiments show that the propose method not only achieves performance improvements but also accelerates the convergence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper addresses an important problem in semi-supervised learning, i.e., robust thresholding selection.\\n2.\\tThe motivation is clearly derived from experimental findings, making it straightforward and intuitive.\\n3.\\tThe proposed method is effective and efficient according to the reported results in this paper.\", \"weaknesses\": \"1.\\tThe proposed method relies heavily on the partition of three types of samples, which may introduce the risk of non-robustness, especially when the partition is not accurate.\\n2.\\tSensitivity analyses of the hyperparameters are absent.\\n3.\\tCompared with the SOTA method SR, the speedup is not that significant.\", \"questions\": \"1.\\tOn page 2, lines 68-69, the authors claim that \\u201cGTR mitigates the threshold sensitivity by focusing on the intra-class properties\\u201d. Could you clarify which specific properties you are referring to?\\n2.\\tWhen you report the training time, do you include the time of preprocessing of dividing the pseudo labels into three groups?\\n3.\\tIn Table 4, why do you not report the results of FlexMatch+SR? I am more interested in the comparisons between GTR and SR.\\n4.\\tThere are some typos in line 285, (\\u201cChanges\\u201d should be \\u201cchanges\\u201d) and in Eq.(9) (\\u201c$ \\\\sim $\\u201d may be \\u201c$ \\\\approx$\\u201d).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes GTR for improving semi-supervised learning tasks. GTR includes grouping and transporting, where grouping tries to cluster pseudo-labels into distinct groups and transporting is used to process different groups. 
Experiments on benchmarks show GTR outperforms other methods, achieving better performance and speedup.", "soundness": "2", "presentation": "1", "contribution": "2", "strengths": "(1) The paper is easy to follow.\n\n(2) Better performance and speedup are achieved in the experiments.", "weaknesses": "(1) The authors should improve the writing of this paper. Especially, the description of transporting is confusing. For example, the examples for equation (3) need more explanation.\n\n(2) There is not enough explanation for the reason why the TTA technique is effective for selecting more reliable semi-hard pseudo-labels.\n\n(3) Lack of theoretical understanding for the proposed method.\n\n(4) More SSL methods need to be included in the experiments, such as SoftMatch and FreeMatch.", "questions": "(1) Why does the example in line 216 violate the claim in line 184 that the probability is summing up to 1?\n\n(2) In equation (4), what is the detailed process of indicator?\n\n(3) How to estimate kernel density in this paper?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "4", "code_of_conduct": "Yes"}", "{\"summary\": \"This manuscript delineates an approach, termed GTR, which is designed for semi-supervised learning. The paper claims that the latest methods for distinguishing low or high-quality labels require complex-designed thresholding strategies but still fail to guarantee robust and efficient selection. In addressing this issue, the GTR model exhibits an amalgamation of data grouping/clustering, optimal transportation, test-time augmentations, and consistency regularization, a combination that endows the method with superior performance in comparison to baseline methods.", "soundness": "3", "presentation": "2", "contribution": "2", "strengths": "S1. An approach to grouping into three clusters (easy, semi-hard, and hard) is worth trying.\n\nS2. 
The evaluation is comprehensive, with comparisons to baseline methods and ablation study providing a compelling demonstration of the superior performance of the proposed method.\", \"weaknesses\": \"W1. A significant concern regarding this paper is its lack of technical innovation. The proposed method is more like a combination of existing techniques. Technical contribution is somewhat limited. Further technical insights regarding the implementation and specific algorithms within the GTR model would also be beneficial.\\n\\nW2. Those hyperparameters and thresholds somewhat degrade the model's applicability in practice. \\n\\nW3. The paper could have a further explanation of the methods TTA, threshold settings, consistency constraints, and how these methods are applied to the model, a factor which could be helpful for fully understanding the proposed method. \\n\\nW4. The empirical evidence presented in the paper lacks persuasiveness. A substantial performance boost could have justified the paper's simplicity and application-oriented nature. However, the minor improvement over baselines, as indicated primarily in experiments, does not substantiate the approach's effectiveness convincingly. In addition, no code has been released or noted in the paper.\\n\\nW5. The notation $T$ in Lines 285, 293, 295, etc. need more clarification.\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript proposes a semi-supervised learning method called GTR (Grouping and Transporting for Robust Thresholding), which improves the robustness of pseudo-label selection through grouping and transporting mechanisms. Specifically, the GTR method divides the quality indicators of pseudo labels into three groups: easy, semi-hard, and hard. 
Then, it uses transporting mechanisms to efficiently select semi-hard samples with test-time augmentations and consistency constraints while saving the selection budgets of easy and hard samples. Finally, this manuscript elucidates how the GTR method promotes the semi-hard group towards the easy group by employing kernel density estimation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This manuscript reveals that the obstacle of existing thresholding techniques lies in their inability to separate the semi-hard group of indicators when selecting high-quality pseudo labels.\\n2. This manuscript proposes the GTR method to obtain robust thresholding through grouping and transporting. \\n3. This manuscript employs kernel density estimation to analyze how the GTR method promotes the semi-hard group towards a better-optimized distribution, such as that of the easy group.\", \"weaknesses\": \"1. Some symbols and parameters are not provided in the context, causing difficulty in understanding.\\n2. The description of Section 3 is unclear, it is suggested to add an algorithm summary.\\n3. In the experiments, there are few comparative methods and no comparison with recent works.\", \"questions\": \"1. This method divides data into three categories based on confidence or reward indicators: easy, semi-hard, and hard. Then, multiple selection and consistency constraints are used to reduce the uncertainty of semi-hard samples and improve the accuracy of pseudo labels. As shown in Figure 1(a), the separation boundaries (yellow lines) of various classes are different. Therefore, will the proposed method lead to class imbalance due to the selection of reliable pseudo labeled data, thereby reducing the model's generalization ability.\\n2. In the transportation section, is the number of easy groups kept constant? 
Is the number of hard groups constantly decreasing as the high-scoring half of the data is assigned to the semi-hard group in the next iteration? Will this setting result in minimal hard data that can be ignored and increase the low scores of data in semi-hard groups? How is the final equation of unlabeled loss obtained? The description of this part is lacking.\n3. The introduction of Equation 6 seems abrupt; the authors claim that the Mahalanobis distance is used to assess the fit of pseudo-labels, with larger distances indicating lower reliability, which guides thresholding decisions. However, the threshold decision has been given in the transporting mechanism, and introducing Mahalanobis distance seems meaningless.\n4. In the experiment, this manuscript demonstrates that the proposed method can reduce training time, but does not provide theoretical analysis.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "3", "code_of_conduct": "Yes"}" ] }
EjHtQlKEzV
Reassessing Layer Pruning in LLMs: New Insights and Methods
[ "Yao Lu", "Yujie Fang", "Zeyu Wang", "Hao Cheng", "Jiaheng Wei", "Dongwei Xu", "Qi Xuan", "Xiaoniu Yang", "Zhaowei Zhu" ]
Although large language models (LLMs) have achieved remarkable success across various domains, their considerable scale necessitates substantial computational resources, posing significant challenges for deployment in resource-constrained environments. Layer pruning, as a simple yet effective compression method, removes layers of a model directly, reducing computational overhead. However, what are the best practices for layer pruning in LLMs? Are sophisticated layer selection metrics truly effective? Does the LoRA (Low-Rank Adaptation) family, widely regarded as a leading method for pruned-model fine-tuning, truly meet expectations when applied to post-pruning fine-tuning? To answer these questions, we dedicate thousands of GPU hours to benchmarking layer pruning in LLMs and gaining insights across multiple dimensions. Our results demonstrate that a simple approach, i.e., pruning the final 25\% of layers followed by fine-tuning the \texttt{lm\_head} and the remaining last three layers, yields remarkably strong performance. Following this guide, we prune Llama-3.1-8B-It and obtain a model that outperforms many popular LLMs of similar size, such as ChatGLM2-6B, Vicuna-7B-v1.5, Qwen1.5-7B and Baichuan2-7B. We release the optimal model weights on Huggingface, and the code is available on GitHub.
[ "LLM Pruning", "Layer Pruning" ]
https://openreview.net/pdf?id=EjHtQlKEzV
https://openreview.net/forum?id=EjHtQlKEzV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcmyfEweH5", "x6NlIUWVkr", "vCuggC5pqr", "urcMm4fegg", "uPdXuwp6E5", "tzVpO8eYPI", "shiRmuNdK8", "pqVx3MbcGj", "nG5n3wwREJ", "iKHDWCSj1X", "a87TUGfYhM", "YvNqfSEOEi", "YTYNPZ0Jhb", "XJZeYsaagN", "PBYt3g34QC", "OeFrmAIRva", "O7spCUPPKw", "M3XDmD2msr", "KYFLtk58Kf", "JLw3nngb9r", "8hnIgVRoaG", "7j4Kcge68W", "7JNQNKUJar", "4eJ8CPKUjg", "4Kp834zDWc", "1g33v7RBLC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732806759953, 1732808264147, 1732556473204, 1732557609543, 1730752933668, 1732558335148, 1732557381216, 1730626650755, 1732958778104, 1732654110356, 1732560676530, 1732558086443, 1732628742526, 1730401958287, 1732771476777, 1732628825167, 1732653584237, 1732620031451, 1730595484561, 1733839960636, 1732557046719, 1732808532755, 1732958709225, 1732556874594, 1732629685608, 1732822769820 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_Dtqt" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_uw4p" ], [ "ICLR.cc/2025/Conference/Submission137/Area_Chair_caTz" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_6JxQ" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_6JxQ" ], [ 
"ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_6JxQ" ], [ "ICLR.cc/2025/Conference/Submission137/Area_Chair_caTz" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_6JxQ" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_uw4p" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_MhNQ" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Area_Chair_caTz" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Authors" ], [ "ICLR.cc/2025/Conference/Submission137/Reviewer_6JxQ" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 6JxQ (5)\", \"comment\": \"We sincerely appreciate the reviewer\\u2019s thoughtful feedback and the opportunity to further clarify the contributions and positioning of our work.\\n\\n**On the Suggested 6.3B Model.**\\nThe 6.3B model presented in our paper is selected as an illustrative example of the effectiveness of reverse-order pruning. Our choice is not intended to imply that this is the sole or universally optimal configuration. Instead, we aimed to demonstrate that the proposed method could achieve significant model compression while maintaining competitive performance.\\n\\n**Emphasizing the importance of avoiding fine-tuning.** As mentioned in our earlier response (**Response to Reviewer 6JxQ (4)**), we have conducted experiments specifically designed to evaluate the performance of pruned models without fine-tuning. In these experiments, we directly compared our Reverse-order pruning method with advanced approaches such as SLEB, which similarly avoid fine-tuning for recovery. 
Experiments demonstrate that the Reverse-order pruning method consistently outperforms SLEB **even without fine-tuning** on Llama-3.1-8B-It and Llama-3-8B, further highlighting its effectiveness.\n\n\nWe encourage the reviewer to refer to **our earlier response** and the accompanying results table for more detailed insights.\n\nWe agree that pruning models must prioritize generalization. However, we respectfully disagree with the notion that \u201cWhile other pruning methods [1] [2] [3] [4] [5] may rely on calibration samples to determine the importance of weights for pruning, they are often better at preserving the original model's performance.\u201d How can we ensure that a model pruned with certain calibration samples generalizes to other datasets? Besides, [6] has demonstrated that **the use of different calibration datasets can result in non-negligible variations in model performance**.\n\n**Comparison to those fine-grained pruning methods.** As mentioned in our earlier response (point 2, \u201cFocus on layer pruning rather than channel pruning or weight pruning,\u201d in **Response to Reviewer 6JxQ (3)**), reverse-order pruning is **fundamentally different** in that it focuses on coarse-grained layer pruning rather than fine-grained pruning [1] [2] [3] [4] [5] of channels or weights. Layer pruning removes entire layers, resulting in a simpler and more computationally efficient model reduction. On the other hand, channel and weight pruning target specific components within layers, requiring more granular adjustments and potentially leading to greater computational overhead.\n\nWe believe that **directly comparing reverse-order pruning to these finer-grained pruning methods is not entirely fair**, as they address different pruning objectives. 
While both approaches aim to reduce model size, layer pruning emphasizes simplicity and computational efficiency, which aligns with our goal of presenting a method that can be easily implemented in real-world scenarios with minimal computational resources. \\n\\nWe hope this clarification addresses the reviewer\\u2019s concern and better articulates the difference between the objectives and complexities of our method versus the fine-grained pruning approaches.\\n\\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\\n\\n[2] Search for Efficient Large Language Models\\n\\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\\n\\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\\n\\n[5] A Simple and Effective Pruning Approach for Large Language Models\\n\\n[6] Williams, Miles, and Nikolaos Aletras. \\\"On the impact of calibration data in post-training quantization and pruning.\\\" Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024.\"}", "{\"title\": \"A Kind Reminder for Reviewer MhNQ\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback and thorough review of our paper. Your insights have greatly contributed to refining our work. In response to the specific concerns you raised, we have provided detailed explanations to address each concern comprehensively. Below, we summarize your concerns and our key responses:\\n\\n* **[W1: similar conclusions to Insight #1 and Insight #3.]:** We differentiate our method, reverse-order pruning, from Insight #1 by **directly removing the last few layers**, rather than starting from the penultimate layer and proceeding from deep to shallow. 
We introduce partial-layer fine-tuning in LLM pruning, where only the remaining layers and lm_head are fine-tuned, unlike the more commonly used LoRA methods.\\nWe emphasize that fine-tuning via knowledge distillation is a full-model process, while LoRA and partial-layer fine-tuning are partial-model methods. Our findings are **complementary to Insight #3**.\\n* **[W2: the results of the pruned model prior to fine-tuning.]:** We conduct additional experiments to present the pruned model\\u2019s performance without fine-tuning, comparing our method to SLEB, a training-free layer pruning method. Our method **consistently outperforms SLEB** even without fine-tuning, which further highlights its effectiveness.\\n\\n* **[W3: Test the proposed method on the OPT model.]:** We test the reverse-order pruning method on OPT-6.7B, Llama2-7B and Llama-3.1-8B-It, which outperforms the Cos method.\\n\\nWe have incorporated your valuable suggestions into the revised manuscript. Thank you once again for your insightful feedback!\\n\\nIf our rebuttal has adequately addressed your concerns, we kindly request that you consider revising your score accordingly. An increased score is critically important to our work at this stage.\\n\\nWe remain open and glad to any additional questions or feedback you may have. Your efforts and detailed review are greatly appreciated, and we value the opportunity to improve our work based on your input. Thank you once again for your time and consideration. We look forward to your further feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 137\"}", "{\"title\": \"Response to Reviewer Dtqt\", \"comment\": \"**Some of the metrics don\\u2019t have a mark to indicate whether the lower or the higher it is\\uff1a** We apologize for the lack of clarity in the original table, which might have confused the interpretation of certain metrics. 
To address this, we have revised the table to include explicit markers (e.g., \u2191 or \u2193) to indicate whether a higher or lower value is better for each metric. We sincerely thank the reviewer for pointing this out, as it has helped improve the clarity and presentation of our paper.\"}", "{\"title\": \"Response to Reviewer MhNQ (2)\", \"comment\": \"**Test the proposed method on the OPT model.** Thank you for this valuable suggestion. Following it, we conduct experiments on OPT-6.7B and Llama2-7B using reverse-order pruning and compare it with [4]. As suggested by the cosine similarity analysis in Figure 2 of [4], OPT-6.7B exhibits high redundancy in shallow layers and Llama2-7B exhibits high redundancy in deep layers. Therefore, we reproduce the method proposed in [4] (Method: Cos) by pruning the 2nd-9th layers for OPT-6.7B and the 22nd-29th layers for Llama2-7B, and fine-tune the pruned models with LoRA on the Alpaca-cleaned dataset. For our method, we simply prune the last 8 layers in OPT-6.7B and Llama2-7B, freeze the other layers, and fine-tune only the last 3 layers and lm_head on the Alpaca-cleaned dataset.\n\nFrom the comparisons on OPT-6.7B and Llama2-7B, we find that pruning the shallow layers of the OPT model is overall better than reverse-order pruning, although the improvement is not always consistent, as shown on MMLU, CMMLU, and WinoGrande, where Cos works slightly worse than reverse-order. We believe the metric in [4] did reveal the redundancy of shallow layers in OPT, but the detailed layer indices may not always be accurate. For example, [4] suggests pruning the 22nd-29th layers, while the experiments in the table below show that pruning the last 8 layers is overall better.\n\nTo further demonstrate our effectiveness, we conduct experiments on Llama-3.1-8B-It. As shown in the table, our pruned model achieves an average accuracy of 0.5807, far exceeding the Cos method's 0.2633. 
We have cited the paper and added the discussion in our revised version.\n\n| Model | Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\n|:-----------:|:----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\n| Llama2-7B | Reverse-order | 0.7089\u00b10.0106 | 0.4875\u00b10.0050 | 0.3020\u00b10.0206 | 0.6317\u00b10.0099 | 0.3780\u00b10.0142 | 0.2789\u00b10.0038 | 0.2590\u00b10.0041 | 0.6251\u00b10.0136 | 0.4589 |\n| Llama2-7B | Cos | 0.7301\u00b10.0104 | 0.4904\u00b10.0050 | 0.2640\u00b10.0197 | 0.6334\u00b10.0099 | 0.3311\u00b10.0138 | 0.2599\u00b10.0037 | 0.2478\u00b10.0040 | 0.6748\u00b10.0132 | 0.4242 |\n| OPT-6.7B | Reverse-order | 0.6893\u00b10.0108 | 0.4068\u00b10.0049 | 0.2180\u00b10.0185 | 0.4949\u00b10.0103 | 0.2730\u00b10.0130 | 0.2469\u00b10.0036 | 0.2526\u00b10.0040 | 0.6101\u00b10.0137 | 0.3717 |\n| OPT-6.7B | Cos | 0.7209\u00b10.0105 | 0.4439\u00b10.0050 | 0.2500\u00b10.0194 | 0.5795\u00b10.0101 | 0.2969\u00b10.0134 | 0.2388\u00b10.0036 | 0.2500\u00b10.0040 | 0.5888\u00b10.0138 | 0.4211 |\n| Llama-3.1-It | Reverse-order | 0.7383\u00b10.0103 | 0.5323\u00b10.0050 | 0.3080\u00b10.0207 | 0.7260\u00b10.0092 | 0.4684\u00b10.0146 | 0.6567\u00b10.0038 | 0.5515\u00b10.0045 | 0.6646\u00b10.0133 | 0.5807 |\n| Llama-3.1-It | Cos | 0.5773\u00b10.0115 | 0.2878\u00b10.0045 | 0.1520\u00b10.0161 | 0.3674\u00b10.0099 | 0.1706\u00b10.0110 | 0.2342\u00b10.0036 | 0.2466\u00b10.0040 | 0.5036\u00b10.0141 | 0.3174 |\n\n[1] Gromov, Andrey, et al. \\\"The unreasonable ineffectiveness of the deeper layers.\\\" arXiv preprint arXiv:2403.17887 (2024).\n\n[2] Muralidharan, Saurav, et al. \\\"Compact language models via pruning and knowledge distillation.\\\" The Thirty-eighth Annual Conference on Neural Information Processing Systems. 2024.\n\n[3] Song, Jiwon, et al. 
\\\"SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks.\\\" arXiv preprint arXiv:2402.09025 (2024).\\n\\n[4] Chen, Xiaodong, Yuxuan Hu, and Jing Zhang. \\\"Compressing large language models by streamlining the unimportant layer.\\\" arXiv preprint arXiv:2403.19135\"}", "{\"summary\": \"In this paper, the author spent thousands of GPU hours to reassess the practices and insights of layer pruning in LLMs. The results showed that reverse-order pruning is simple yet effective (simply pruning the last several layers performs better than many complex pruning metrics); partial-layer fine-tuning (freezing the other layers and fine-tuning only the last few remaining layers and lm_head) can achieve higher accuracy than LoRA fine-tuning; one-shot pruning in more beneficial than iterative fine-tuning considering both training costs and performance gains.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The structure of the paper is well-designed and organized\", \"The background information is rich, especially those math equations, which makes it easy for someone who is not familiar with this field to understand the concept of layer pruning and relevant techniques.\", \"Because the paper is an experiments-based publication, it is very good there are lots of diverse experiments conducted in the paper, including plenty of different datasets and models.\", \"The results of the experiment are very rich in graphs and tables, and the results are clear briefly.\"], \"weaknesses\": [\"Some of the metrics don\\u2019t have a mark to indicate whether the lower or the higher it is, the performance is better, especially some uncommon metrics.\"], \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6JxQ (2)\", \"comment\": \"**The 
ablation study with a different number of samples used in sft.** To answer your question, we conduct further experiments on the number of samples used in SFT. Specifically, we use 20%, 40%, 60%, 80% and 100% of the alpaca-cleaned dataset for partial fine-tuning. As shown in the table below, we find that the number of samples used in SFT indeed affects the performance of the pruned model. When there is only 20% of the dataset, the performance of the model declines significantly.\\n\\n| Data Quantity | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\\n|:------:|:--------------:|:--------------:|:--------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------:|\\n| 100% | 0.7383\\u00b10.0103 | 0.5323\\u00b10.0050 | 0.3080\\u00b10.0207 | 0.7260\\u00b10.0092 | 0.4684\\u00b10.0146 | 0.6567\\u00b10.0038 | 0.5515\\u00b10.0045 | 0.6646\\u00b10.0133 | 0.5807 |\\n| 80% | 0.7372\\u00b10.0103 | 0.5279\\u00b10.0050 | 0.3100\\u00b10.0207 | 0.7235\\u00b10.0092 | 0.4565\\u00b10.0146 | 0.6515\\u00b10.0038 | 0.5477\\u00b10.0045 | 0.6567\\u00b10.0133 | 0.5764 |\\n| 60% | 0.7399\\u00b10.0102 | 0.5242\\u00b10.0050 | 0.3140\\u00b10.0208 | 0.7100\\u00b10.0093 | 0.4497\\u00b10.0145 | 0.6551\\u00b10.0038 | 0.5487\\u00b10.0045 | 0.6582\\u00b10.0133 | 0.5747 |\\n| 40% | 0.7399\\u00b10.0102 | 0.5194\\u00b10.0050 | 0.3060\\u00b10.0206 | 0.7020\\u00b10.0094 | 0.4548\\u00b10.0146 | 0.6540\\u00b10.0038 | 0.5531\\u00b10.0045 | 0.6630\\u00b10.0133 | 0.5740 |\\n| 20% | 0.7383\\u00b10.0103 | 0.5077\\u00b10.0050 | 0.2980\\u00b10.0205 | 0.6860\\u00b10.0095 | 0.4360\\u00b10.0145 | 0.6455\\u00b10.0038 | 0.5458\\u00b10.0045 | 0.6590\\u00b10.0133 | 0.5083 |\\n\\n**What is the experiment setup for other layer pruning methods.** As we mentioned in line 207 and 208 of our initial submission, we utilize the Alpaca-cleaned dataset with LoRA to recover the performance for other layer pruning methods. 
As mentioned in lines 259 and 260, we use LoRA with a rank d of 8, a batch size of 64, and the AdamW optimizer. The learning rate is set to 1\u00d710\u22125 with 100 warm-up steps. We followed the guidelines of [2] and divided 2000 samples from the Alpaca dataset into a validation set, with the remaining samples used for training. \n\nTo ensure fairness, the training sets are kept consistent across all pruning methods. In Table 8, LLaMA-3.1-6.3B-It-Alpaca is obtained by pruning 8 layers from LLaMA-3.1-8B-It using the Reverse Order metric, fine-tuning only the last three layers and the lm_head on the Alpaca-cleaned dataset, while freezing the remaining layers. ShortGPT (BI), Shortened LLaMA (PPL) and Shortened LLaMA (Taylor) are fine-tuned on the same dataset with LoRA. As shown in Table 8 of our initial submission, Llama-3.1-6.3B-It-Alpaca is nearly 18% better than ShortGPT and 10%+ better than Shortened LLaMA, which demonstrates the effectiveness of our method. \n\n[1] Song, Jiwon, et al. \\\"SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks.\\\" arXiv preprint arXiv:2402.09025 (2024).\n\n[2] Ma, Xinyin, Gongfan Fang, and Xinchao Wang. \\\"Llm-pruner: On the structural pruning of large language models.\\\" Advances in neural information processing systems 36 (2023): 21702-21720.\"}", "{\"title\": \"Response to Reviewer MhNQ\", \"comment\": \"**Similar conclusions to Insight #1 and Insight #3.**\n\nFor Insight #1, the authors of [1] find that removing layers beginning at the penultimate layer and proceeding from deep to shallow until the desired number of layers has been removed is effective. In contrast, reverse-order pruning directly cuts off the last few layers without retaining the last layer, which is different from [1]. 
Besides, we propose to use partial-layer fine-tuning in LLMs rather than the popular LoRA methods, i.e., freezing the other layers and fine-tuning only the last few remaining layers and lm_head. To the best of our knowledge, we are the first to demonstrate the effectiveness of partial-layer fine-tuning in LLM pruning.\n\nFor Insight #3: Thank you for your feedback. We acknowledge the insights from [2], which reaches a similar conclusion that iterative pruning provides no added benefit. However, we would like to clarify that the context of our work differs from that of [2]. While [2] demonstrates that iterative pruning is ineffective when using knowledge distillation to retrain the pruned model, our experiments focus on performance recovery of iterative pruning using partial-layer fine-tuning and LoRA. As mentioned in Table 5 of our initial submission, iterative pruning offers no benefit when using LoRA or partial-layer fine-tuning to restore the pruned model's performance. It is worth noting that fine-tuning using knowledge distillation is a full-model fine-tuning method, while the LoRA and partial-layer fine-tuning we use are partial-model fine-tuning methods. Therefore, our conclusions are orthogonal to those of [2] and complement each other.\n\n**Present the results of the pruned model prior to fine-tuning.**\nThank you for this insightful suggestion. We conduct additional experiments and compare our method with SLEB [3], an advanced training-free layer pruning method (mentioned by Reviewer uw4p), in the table below. The results demonstrate that our proposed method consistently outperforms SLEB even without fine-tuning, further highlighting its effectiveness. 
\\nWe appreciate your suggestion, as it has helped enhance the comprehensiveness of our evaluation.\\n\\n| Model | Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\\n|:------------------------:|:------------------------:|:----------:|:----------:|:----------:|:---------:|:---------:|:---------:|:---------:|:----------:|:-------:|\\n| Llama-3.1-8B-It | SLEB | 0.7252\\u00b10.0104 | 0.4415\\u00b10.0050 | 0.2380\\u00b10.0191 | 0.6423\\u00b10.0098 | 0.3166\\u00b10.0136 | 0.3396\\u00b10.0040 | 0.2756\\u00b10.0042 | 0.5888\\u00b10.0138 | 0.4192 |\\n| Llama-3.1-8B-It | Reverse-order | 0.7002\\u00b10.0107 | 0.4021\\u00b10.0049 | 0.2920\\u00b10.0204 | 0.6178\\u00b10.0100 | 0.3993\\u00b10.0143 | 0.6346\\u00b10.0039 | 0.5458\\u00b10.0045 | 0.6251\\u00b10.0136 | 0.5271 |\\n| Llama-3-8B | SLEB | 0.7111\\u00b10.0106 | 0.4401\\u00b10.0050 | 0.2280\\u00b10.0188 | 0.6014\\u00b10.0100 | 0.2807\\u00b10.0131 | 0.2674\\u00b10.0037 | 0.2502\\u00b10.0040 | 0.5683\\u00b10.0139 | 0.3689 |\\n| Llama-3-8B | Reverse-order | 0.6921\\u00b10.0108 | 0.4035\\u00b10.0049 | 0.3040\\u00b10.0206 | 0.6014\\u00b10.0100 | 0.3720\\u00b10.0141 | 0.5603\\u00b10.0040 | 0.4216\\u00b10.0045 | 0.5975\\u00b10.0138 | 0.4940 |\"}", "{\"summary\": \"This paper explores layer pruning in Large Language Models (LLMs) to reduce computational overhead while maintaining performance. The authors conduct extensive experiments across multiple dimensions, including different layer selection metrics, fine-tuning methods, and pruning strategies. Their findings suggest that a simple reverse-order pruning strategy\\u2014pruning the last 25% of layers\\u2014performs as well as more sophisticated methods. Applying these insights, they prune Llama-3.1-8B-Instruct to create Llama-3.1-6.3B-It models, which outperform several popular LLMs of similar size.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper conducts lots of empirical study to support their findings, which may be beneficial for the community for future research.\\n\\n2. By releasing the pruned model weights and code, the authors contribute to open science and facilitate reproducibility and further research in the field.\", \"weaknesses\": \"1. The main findings emphasize that simple methods can be highly effective. While valuable, this insight may be seen as incremental, confirming existing intuitions rather than introducing new methodologies. The effectiveness of pruning the last layers and fine-tuning only specific parts of the model aligns with established practices in model compression and transfer learning.\\n\\n2. The study compares simple pruning metrics with Block Influence (BI) from ShortGPT [1] but does not include comparisons with more recent and advanced layer pruning methods such as SLEB [2] and FinerCut [3]. Both SLEB and FinerCut have introduced innovative approaches to layer pruning in LLMs, offering potentially significant improvements in efficiency and performance.\\n\\n3. Although the paper finds that reverse-order pruning (pruning the last several layers) is effective, it does not delve into why this method outperforms other metrics. An analysis of the role and importance of the last layers in LLMs could provide valuable insights and contribute to the development of more effective pruning strategies.\\n\\n[1] Men, Xin, et al. \\\"Shortgpt: Layers in large language models are more redundant than you expect.\\\" arXiv preprint arXiv:2403.03853 (2024).\\n\\n[2] Song, Jiwon, et al. \\\"SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks.\\\" arXiv preprint arXiv:2402.09025 (2024).\\n\\n[3] Zhang, Yang, et al. \\\"FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models.\\\" arXiv preprint arXiv:2405.18218 (2024).\", \"questions\": \"1. 
Despite the authors' efforts to provide code and models, some experimental details are insufficiently specified. For example, exact hyperparameters for all experiments, detailed configurations of the fine-tuning setup, and the procedures for selecting and processing calibration samples for the data-driven pruning metrics are not fully described. Can the authors give more details?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nCould you kindly respond and indicate whether authors have addressed your concerns?\\n\\nThanks, AC\"}", "{\"title\": \"Follow up (2)\", \"comment\": \"4. I am not sure what is the sparsity of the provided results, if it is still the 6.3B model (pruned from 8B LLaMA-3 with 62.99% average accuracy), I think the results are not that good as the total sparsity is only around 22%. While for 20% structured pruning, as shown in Table 2 in work [1], the Wanda, FLAP and LLM-Pruner do not decrease much in average accuracy on zero-shot datasets.\\n\\n\\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\\n\\n[2] Search for Efficient Large Language Models\\n\\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\\n\\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\\n\\n[5] A Simple and Effective Pruning Approach for Large Language Models\"}", "{\"comment\": \"Hi, thanks for your rebuttal and the new experiments for large model (LLaMA-3 70B) and the ablation for number of sft samples.\\n\\nThe main concern of this paper still remains in the novelty and the contribution. Authors try to compare the reserve pruning with other pruning methods to show the effectiveness of the reserve pruning. \\n\\nHowever, the work [1] shows the method of structured pruning globally, and the work [2] also shows the global optimal small dense results. 
Thus, I am curious whether reverse-order layer pruning can achieve the globally optimal pruning result, as there might be some redundancy in the front layers, as shown by Figure 3 in work [1] and Figure 3 in work [3]. Could you explain the global optimality or compare with the works [1] [2]?\n\nMeanwhile, works [1] [2] [4] [5] do not include fine-tuning (backward passes) for further performance improvement, while this work adopts the Alpaca dataset for fine-tuning, which shows limited generalization (i.e., it relies on the fine-tuning dataset for performance recovery). Other works [1] [2] [4] can recover with only the WikiText2 dataset, and work [5] does not even require calibration. Even the search effort and calibration effort of work [2] remain similar to those of work [3], so I think adopting Alpaca for fine-tuning is not strictly necessary. In my opinion, it is better to adopt a non-backward method for LLMs to recover model performance, as it can be generalized to extra-large models to save resources. By the way, the works [1] [2] [4] [5] can all generate a smaller model and are compatible with further fine-tuning, and I believe that these works may achieve comparable results to this work after fine-tuning.\n\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\n\n[2] Search for Efficient Large Language Models\n\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\n\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\n\n[5] A Simple and Effective Pruning Approach for Large Language Models\n\nI really respect the authors' contribution to this paper and the rebuttal, but my concern about this paper's novelty still remains. Therefore, currently, I may retain my score. 
\n\nI am ready for further discussion.\"}", "{\"title\": \"Response to Reviewer 6JxQ\", \"comment\": \"**Most of the work done in this paper is a kind of ablation study.** We appreciate the reviewer's feedback regarding the novelty of our work. We respectfully disagree with the characterization of our work as \\\"ablation studies,\\\" as our research makes two significant contributions to the field:\n\n* Through systematic exploration, we discovered that 'Reverse-order' pruning consistently outperforms more complex pruning methods across diverse datasets and models. This finding is particularly impactful as it establishes a strong, reproducible baseline for LLM layer pruning - demonstrating that simpler approaches can be more effective than complicated pruning strategies. This challenges the common assumption that more sophisticated pruning methods are necessarily better.\n\n* Our second key finding - that fine-tuning only the last few remaining layers and lm_head outperforms popular methods like LoRA - represents a paradigm shift in LLM pruning optimization. This discovery provides practical benefits for efficient model adaptation.\n\nOur work is, to the best of our knowledge, the first to demonstrate these phenomena in LLM pruning, offering significant practical value through both the simple yet effective reverse-order pruning strategy and the empirical insights into the effectiveness of partial fine-tuning. These findings provide clear, actionable guidelines for practitioners working on LLM pruning while challenging common assumptions about the necessity of complex pruning methods and the optimality of LoRA-based approaches.\n\nThe simplicity of our proposed methods is actually a strength, as it enables broader adoption and reproducibility while achieving superior results compared to more complex approaches. 
We believe these contributions advance the field's understanding of LLM pruning and provide practical tools for researchers and practitioners.\n\n**Do not include the large models.** To address this, we have conducted experiments on the LLaMA-3 70B model. However, due to limited time and computational resources, we are unable to perform fine-tuning experiments on this model and have focused instead on evaluating the performance of the pruned models without fine-tuning. In order to verify the effectiveness of reverse-order pruning, we compare it with an advanced training-free layer pruning method, SLEB [1] (mentioned by Reviewer uw4p). We prune 16 layers with these two methods. As shown in the table, we find that reverse-order pruning and SLEB have similar average performance, which demonstrates the effectiveness of reverse-order pruning. It is important to note that SLEB requires evaluating the importance of each layer, removing the least important layer from the current model, and then iterating through this verification process. This iterative approach of SLEB is time-consuming and takes up a lot of memory for LLaMA-3 70B. 
In contrast, reverse-order pruning is simple but effective.\\n\\n| Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc | \\n|:---------:|:--------------:|:--------------:|:--------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------:|\\n | SLEB | 0.7916\\u00b10.0095 | 0.5805\\u00b10.0049 | 0.3360\\u00b10.0211 | 0.8005\\u00b10.0082 | 0.4974\\u00b10.0146 | 0.5604\\u00b10.0040 | 0.4125\\u00b10.0045 | 0.7238\\u00b10.0126 | 0.5878 | \\n| Reverse-order | 0.7231\\u00b10.0104 | 0.4461\\u00b10.0050 | 0.3280\\u00b10.0210 | 0.6561\\u00b10.0097 | 0.4616\\u00b10.0146 | 0.7473\\u00b10.0034 | 0.6057\\u00b10.0044 | 0.6961\\u00b10.0129 | 0.5830 |\\n | Original | 0.8243\\u00b10.0089 | 0.6636\\u00b10.0047 | 0.3800\\u00b10.0217 | 0.8683\\u00b10.0069 | 0.6032\\u00b10.0143 | 0.7519\\u00b10.0033 | 0.6667\\u00b10.0042 | 0.8058\\u00b10.0111 | 0.6958 |\\n\\n**The generation speed compared to other models with similar model size included in Table 8.** To address this, we have measured the generation speed of other models with similar model size. As mentioned in line 522-524, the latency is tested under the test set of WikiText2 on a single NVIDIA RTX A100GPU. As shown in the table below, our pruned models are significantly faster than existing models with similar model size.\\n| Model | Latency |\\n|:-------------------------------:|:--------:|\\n| Llama-3.1-6.3B-It-Alpaca,Llama-3.1-6.3B-Dolly | 210.35s |\\n| Vicuna-7B | 286.42s |\\n| Qwen1.5-7B | 270.48s |\\n| LLAMA3-8B | 277.24s |\\n| Baichuan2-7B | 324.99s |\\n| Llama-3.1-8B-Instruct | 311.40s |\"}", "{\"title\": \"Response to Reviewer 6JxQ (3)\", \"comment\": \"We sincerely appreciate the reviewer\\u2019s thoughtful feedback and the opportunity to further clarify the contributions and positioning of our work. Below, we address the concerns regarding novelty and methodology, and we hope to provide additional insights for discussion.\\n\\n**1. 
Different angle and scope of our contribution.**\\n\\nWhile we acknowledge the effectiveness of global search methods for optimizing LLM structures, as highlighted in works [1] and [2], our paper approaches the problem from a different angle. Specifically, we focus on simplicity and practicality by proposing reverse-order pruning, which prioritizes ease of implementation and computational efficiency over theoretical guarantees of optimality.\\n\\nWe intentionally did not claim global optimality for reverse-order pruning or any resulting model structure, as we recognize that such guarantees would require a rigorous theoretical framework beyond the scope of this paper, particularly given the complexity of LLMs. Instead, we emphasize in Insight #1 on page 2 that reverse-order pruning is a **simple yet effective** method. Its effectiveness has been validated through robust numerical results presented in the main paper and further substantiated during the rebuttal phase with experiments on LLaMA-3 70B and the ablation study on the number of SFT samples.\\n\\n**2. Focus on layer pruning rather than channel pruning or weight pruning.**\\n\\nWe first give the definitions of layer pruning, channel pruning, and weight pruning.\\n* Layer pruning removes entire layers from the model, resulting in a simpler and coarser-grained reduction. \\n* Channel pruning removes specific channels (or neurons) from individual layers, resulting in a finer-grained reduction in the model's width.\\n* Weight pruning removes individual weights (connections) within layers. It is the most granular form of pruning, targeting the least important connections.\\n\\nOur work is primarily concerned with layer pruning, driven by the motivation to develop a coarse-grained yet effective approach. It is worth noting that the reverse-order pruning method can complete the pruning without additional information. 
In contrast, the cited works utilize **channel pruning** [1,2,3] and **weight pruning** [4,5], which typically requires fine-grained adjustments and additional computational effort to prune at a more granular level (e.g., weights or channels).\\n\\nWe find that comparing layer pruning directly to channel pruning or weight pruning methods is not entirely fair, as they address different pruning objectives. Layer pruning offers the advantage of simplicity and computational efficiency. This practical focus aligns with our goal of presenting an easy-to-implement method that can be readily applied in real-world scenarios.\\n\\n**3. There might be some redundancy in the front layers.** \\n\\nThe metric used in Figure 3 of work [1] to identify redundancy in the front layers is based on magnitude, which is also a baseline metric we evaluate in our paper (in line 176 of page 4). As shown in Table 1 of our initial submission, we demonstrate that the Reverse-order pruning outperforms magnitude-based approaches (both Magnitude-l1 and Magnitude-l2). This indicates that while [1] identifies redundancy using magnitude, our proposed method provides a more effective and reliable metric for achieving better performance after pruning.\\n\\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\\n\\n[2] Search for Efficient Large Language Models\\n\\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\\n\\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\\n\\n[5] A Simple and Effective Pruning Approach for Large Language Models\"}", "{\"summary\": \"This paper mainly focuses on the ablation study of layer pruning in LLMs.\\nThe paper first explores the different layer pruning strategies with different fine-tuning methods.\\nThen, they find that the reverse-order is the optimal layer pruning strategy. \\nMeanwhile. 
they find that the partial-layer fine-tuning outperforms LoRA-based techniques.\\nFinally, they release two models directly pruned from Llama-3.1-8B-Instruct, which outperforms other popular models with similar sizes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The ablation study about layer pruning and fine-tuning in this paper seems to be good.\\n2. The paper finds that the partial-layer fine-tuning outperforms LoRA-based techniques, which is important to the post-pruning fine-tuning research areas.\", \"weaknesses\": \"1. The novelty of this paper is limited. Most works have done in this paper are kind of ablation study. The paper does not propose any new method, the paper only provides the findings after the ablation study with its comprehensive benchmarking. The method of pruning layers in 'Reverse-order' is only the findings obtained from the ablation study compared to other methods, which is not a novel method. Meanwhile, the ablation of layer pruning methods is only conducted with models with around 7B or less parameters, which shows limited generalization to larger models.\\n2. The ablation study for layer pruning in Table 1 2 5 8 does not include the large models, for example LLaMA-2 30B, LLaMA-2 70B and LLaMA-3 80B, thus the generalization of this method on larger models is limited. And so does the ablation of fine-tuning. As the model becomes larger, the redundancy of the model becomes larger, which is more important to show the pruning results with large models, especially 70B or 80B models.\\n3. According to Table 7, it shows that the fine-tuning dataset is sensitive to the model performance, which hurts the generalization of this method. The work does not discuss the calibration dataset used for those other pruning methods, which results in the bias of the results. Meanwhile, the paper does not include the ablation study with different number of samples used in sft.\", \"questions\": \"1. 
How about the performance of this method when applied to large LLMs including LLaMA-2 30B, LLaMA-2 70B and LLaMA-3 80B? As it is intuitive to apply pruning techniques (especially layer pruning methods) on larger models (especially 70B or 80B) models, because there are much more redundancy compared to 7B model family.\\n2. How about the generation speed compared to other models with similar model size included in Table 8?\\n3. How about the ablation study with different number of training samples in sft?\\n4. What is the experiment setup for other layer pruning methods? especially, what is the number of samples for the calibration?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewers, please kindly respond\", \"comment\": \"Dear Reviewers,\\n\\nIf you have not responded to author's rebuttal, please kindly do so as soon as possible. The deadline is Dec 2, but the authors can potentially further clarify questions if you respond earlier. Thanks!\\n\\nBest, AC\"}", "{\"title\": \"Response to Reviewer 6JxQ (4)\", \"comment\": \"**4. It is a better way for us to adopt the non-backward method for LLMs to recover model performance.**\\n\\nWe agree with your suggestion that adopting non-backward methods for recovering model performance can be a promising direction, particularly as they generalize well to extremely large models while saving computational resources. In line with this perspective, we have conducted experiments to evaluate the performance of pruned models **without fine-tuning** and directly compared these results with existing methods such as SLEB, which also do not rely on fine-tuning for recovery (mentioned by Reviewer uw4p). Specifically, we conduct additional experiments on Llama-3.1-8B-It and Llama-3-8B and compare our method with SLEB. 
As shown in the Table below, the Reverse-order method consistently outperforms SLEB even without fine-tuning, further highlighting its effectiveness. \\n\\n| Model | Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\\n|:------------------------:|:------------------------:|:----------:|:----------:|:----------:|:---------:|:---------:|:---------:|:---------:|:----------:|:-------:|\\n| Llama-3.1-8B-It | SLEB | 0.7252\\u00b10.0104 | 0.4415\\u00b10.0050 | 0.2380\\u00b10.0191 | 0.6423\\u00b10.0098 | 0.3166\\u00b10.0136 | 0.3396\\u00b10.0040 | 0.2756\\u00b10.0042 | 0.5888\\u00b10.0138 | 0.4192 |\\n| Llama-3.1-8B-It | Reverse-order | 0.7002\\u00b10.0107 | 0.4021\\u00b10.0049 | 0.2920\\u00b10.0204 | 0.6178\\u00b10.0100 | 0.3993\\u00b10.0143 | 0.6346\\u00b10.0039 | 0.5458\\u00b10.0045 | 0.6251\\u00b10.0136 | 0.5271 |\\n| Llama-3-8B | SLEB | 0.7111\\u00b10.0106 | 0.4401\\u00b10.0050 | 0.2280\\u00b10.0188 | 0.6014\\u00b10.0100 | 0.2807\\u00b10.0131 | 0.2674\\u00b10.0037 | 0.2502\\u00b10.0040 | 0.5683\\u00b10.0139 | 0.3689 |\\n| Llama-3-8B | Reverse-order | 0.6921\\u00b10.0108 | 0.4035\\u00b10.0049 | 0.3040\\u00b10.0206 | 0.6014\\u00b10.0100 | 0.3720\\u00b10.0141 | 0.5603\\u00b10.0040 | 0.4216\\u00b10.0045 | 0.5975\\u00b10.0138 | 0.4940 |\"}", "{\"title\": \"Follow up (1)\", \"comment\": \"Hi, thanks for your reply and explanation.\\n\\n1. The reason I reference works [1] and [2] is to suggest that you take the global optimum into account. Given the relatively small search space involved, especially in the early layers, this consideration is crucial. For instance, in models with 7B or 8B parameters, there are only 32 layers. If the importance of each of these 32 layers can be thoroughly examined, it would allow for the creation of smaller models with a broader range of sparse ratios, enabling greater flexibility and efficiency. However, in your paper, there is only 6.3B models you suggested as the optimal one.\\n\\n2.a. 
I disagree with this point. While other pruning methods [1] [2] [3] [4] [5] may rely on calibration samples to determine the importance of weights for pruning, they are often better at preserving the original model's performance, which stems from the extensive and powerful pretraining dataset. Further fine-tuning, on the other hand, can potentially compromise the model's generalization ability. Therefore, I believe models that avoid additional fine-tuning are more likely to achieve superior generalization compared to those that incorporate it.\\n\\n2.b. As for the comparison to those fine-grained pruning methods [1] [2] [3] [4] [5], they can also generate the small dense models which are practical in real-world scenarios. And their methods are general and can be applied to other LLMs easily as the cost of those methods are not that much especially when compared to the fine-tuning on Alpaca.\\n\\n2.c. According to the Table 2 in FLAP, with 20% structured sparsity, FLAP achieves a little drop on zero-shot performance, and Wanda (which does not require weight update after pruning) even achieves better performance than dense model. On the other hand, according to the non-finetuning results and the ablation study for number of fine-tuning samples, your layer pruning method does not achieve good results and is sensitive to the number of fine-tuning samples.\\n\\n3.a. I am afraid you misunderstand the Figure 3 of work [1], the figure shows the **magnitude of the metric** proposed by this work. rather than the magnitude of the weights with $l1$ or $l2$ norm. You can check how the metric is defined in Equation 5 of work [1]. \\n\\n3.b. 
Also, according to Figure 3 in the work [3] which shows the layer sensitivity for pruning, it shows that the middle layers are redundant while the latest layers are sensitive.\\n\\n\\n\\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\\n\\n[2] Search for Efficient Large Language Models\\n\\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\\n\\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\\n\\n[5] A Simple and Effective Pruning Approach for Large Language Models\"}", "{\"comment\": \"Thank you for the authors' response and the additional experiments provided. While the results demonstrate effectiveness, I remain concerned about the novelty and contribution of this work. Specifically, the paper appears to lack a clear and substantial contribution that could guide or inspire future research in the field.\\n\\nThe paper is like \\\"Shooting in the dark and pretending you aimed\\\", where choosing the last layer and fine-tuning seems more like an empirical finding after extensive experimentation rather than a method driven by a well-defined motivation or theoretical insight. This raises concerns about the underlying rationale behind the proposed method.\\n\\nIn research, we typically expect a strong contribution from one of two scenarios: (1) identifying a compelling problem or motivation and presenting effective solutions, or (2) uncovering an unusual or unexpected observation and conducting an in-depth analysis to understand its underlying causes. For this paper, the latter seems more applicable, yet the analysis provided does not feel sufficiently mature or comprehensive to support this claim.\\n\\nAdditionally, I share Reviewer 6JxQ's concerns regarding the lack of clarity surrounding how the proposed method works. 
Furthermore, the reliance on specific datasets for fine-tuning appears to play a significant role in the observed accuracy improvements, which raises questions about the generalizability of the approach.\"}", "{\"summary\": \"In this work, the authors conduct comprehensive empirical study on layer-wise post-training pruning across various LLMs. Specifically, they present three key conclusion: (1) reverse-order layer pruning outperforms other layer-wise pruning importance metrics, (2) fine-tuning the last few remaining layers yields better performance than LoRA, and (3) iterative layer pruning shows no advantage over one-shot layer pruning. Based on these analysis, the authors develop pruned models using Llama-3.1-Instruct, achieving better performance compared to other LLMs of the same or larger size.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well organized and easy to follow.\\n2. It\\u2018s inspired to see fine-tuning last few remaining layers (e.g., 3) can outperform LoRA. \\n3. The pruned model based on LLama-3.1-Instruct shows better performance compared prior LLMs with similar model size.\", \"weaknesses\": \"1. Similar conclusions to Insight #1 and Insight #3 have been noted in prior work. Specifically, [1] demonstrates that deeper layers are less effective, so it would be helpful to clarify how this work differs from [1]. Additionally, as the authors mentioned, [2] shows that iterative pruning provides no added benefit.\\n\\n2. The authors only present the results of the pruned model after fine-tuning. It would be informative to see the results prior to fine-tuning to see if the proposed method consistently outperforms others.\\n\\n3. It would also be valuable to test the proposed method on the OPT model. As revealed in [3], unlike other LLMs, OPT models exhibit high redundancy in shallow layers rather than in deeper layers by using cosine similarity analysis.\\n\\n[1] Gromov, Andrey, et al. 
\\\"The unreasonable ineffectiveness of the deeper layers.\\\" arXiv preprint arXiv:2403.17887 (2024).\\n\\n[2] Compact language models via pruning and knowledge distillation. arXiv preprint arXiv:2407.14679, 2024 \\n\\n[3] Chen, Xiaodong, Yuxuan Hu, and Jing Zhang. \\\"Compressing large language models by streamlining the unimportant layer.\\\" arXiv preprint arXiv:2403.19135\", \"questions\": \"Overall, I find this work valuable for offering new insights into post-training layer-wise pruning, particularly with Insight #2. However, the work could be strengthened by addressing the questions as shown in Weakness above: (1) clarify the differences compared to [1], (2) analyze performance both before and after fine-tuning, and (3) evaluate the proposed method on the OPT model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns found.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer uw4p (2)\", \"comment\": \"**An analysis of the role and importance of the last layers in LLMs.** Thank you for your thoughtful feedback. After pruning, the reduced parameter set may necessitate task-specific representation learning to optimize performance. Fine-tuning the top layers (those closer to the output) and the lm_head proves particularly effective since these layers are most directly associated with task-specific features.\\nBesides, fine-tuning the last layers is a widely recognized approach in transfer learning [1-3], further supporting our claim that last layers are directly associated with task-specific features. LoRA, on the other hand, modifies parameters indirectly and may not achieve the same level of task-specific optimization.\\n\\n**Experimental details.** Thank you for your thoughtful feedback. 
We appreciate your attention to the experimental details and your interest in the reproducibility of our results. We set the batch size and epoch to 64 and 2, respectively. We use the AdamW optimizer. The learning rate is set to 1\\u00d710\\u22125 with 100 warming steps. We encourage you to explore and experiment with the default settings in our codes. If there are still questions, we welcome additional questions or requests for clarification. We are committed to supporting reproducibility.\\n\\n[1] Zhuang, Fuzhen, et al. \\\"A comprehensive survey on transfer learning.\\\" Proceedings of the IEEE 109.1 (2020): 43-76.\\n\\n[2] Jang Y, Lee H, Hwang S J, et al. Learning what and where to transfer[C]//International conference on machine learning. PMLR, 2019: 3030-3039.\\n\\n[3] Huh, Minyoung, Pulkit Agrawal, and Alexei A. Efros. \\\"What makes ImageNet good for transfer learning?.\\\" arXiv preprint arXiv:1608.08614 (2016).\"}", "{\"title\": \"A Kind Reminder for Reviewer Dtqt\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback and thorough review of our paper. Your insights have greatly contributed to refining our work. In response to the specific concerns you raised, we have provided detailed explanations to address each concern comprehensively. Below, we summarize your concerns and our key responses:\\n\\n* **[W1: Some of the metrics don\\u2019t have a mark to indicate whether the lower or the higher it is.]:** We greatly appreciate your constructive feedback, as well as your support and recommendation for acceptance. We will carefully incorporate the suggested descriptions of metrics into our revision to further strengthen the manuscript.\\n\\nWe have incorporated your valuable suggestions into the revised manuscript. Thank you once again for your insightful feedback!\\n\\nIf our rebuttal has adequately addressed your concerns, we kindly request that you consider revising your score accordingly. 
An increased score is critically important to our work at this stage.\\n\\nWe remain open and glad to any additional questions or feedback you may have. Your efforts and detailed review are greatly appreciated, and we value the opportunity to improve our work based on your input. Thank you once again for your time and consideration. We look forward to your further feedback.\\n\\nBest regards,\\n\\nAuthors of Paper 137\"}", "{\"comment\": \"Dear Reviewer,\\n\\nCould you kindly respond and indicate whether authors have addressed your concerns?\\n\\nThanks, AC\"}", "{\"title\": \"Response to Reviewer uw4p\", \"comment\": \"**The insight may be seen as incremental.** While we agree that fine-tuning specific layers is a well-established practice in model compression and transfer learning on CNNs, we would like to emphasize that our work is, to the best of our knowledge, the first to propose and demonstrate the effectiveness of this approach in the context of pruning large language models (LLMs). As mentioned in Table 2 of our initial submission, our results highlight that fine-tuning only the last few remaining layers and lm_head achieves significant improvements in both performance and efficiency, particularly when compared to the popular LoRA fine-tuning. Notably, our findings highlight a unique advantage of LLM layer pruning, as LoRA and partial-layer fine-tuning exhibit comparable performance during full-model fine-tuning (as shown in Table 3 of our initial submission). This distinction offers new insights and practical contributions to the field of LLM pruning.\\n\\n We appreciate your recognition of the value of our findings and hope this clarification underscores the novelty of our work.\\n\\n**Comparisons with more recent and advanced layer pruning methods.** Thank you for pointing out the importance of comparing with more advanced layer pruning methods such as SLEB and FinerCut. 
While we recognize the contributions of FinerCut, its implementation is not publicly available, which prevents us from conducting a fair and reproducible comparison. To this end, we conduct experiments with SLEB on Llama-3.1-8B-It and Llama3-8B. For each pruning method, we prune 8 layers. Since SLEB is processed based on the inference-only approach without any finetuning, for a fair comparison, we only prune the model with the last 8 layers without finetuning. As shown in Table below, reverse-order pruning outperforms SLEB, which demonstrates its efficiency.\\n| Model | Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\\n|:------------------------:|:------------------------:|:----------:|:----------:|:----------:|:---------:|:---------:|:---------:|:---------:|:----------:|:-------:|\\n| Llama-3.1-8B-It | SLEB | 0.7252\\u00b10.0104 | 0.4415\\u00b10.0050 | 0.2380\\u00b10.0191 | 0.6423\\u00b10.0098 | 0.3166\\u00b10.0136 | 0.3396\\u00b10.0040 | 0.2756\\u00b10.0042 | 0.5888\\u00b10.0138 | 0.4192 |\\n| Llama-3.1-8B-It | Reverse-order | 0.7002\\u00b10.0107 | 0.4021\\u00b10.0049 | 0.2920\\u00b10.0204 | 0.6178\\u00b10.0100 | 0.3993\\u00b10.0143 | 0.6346\\u00b10.0039 | 0.5458\\u00b10.0045 | 0.6251\\u00b10.0136 | 0.5271 |\\n| Llama-3-8B | SLEB | 0.7111\\u00b10.0106 | 0.4401\\u00b10.0050 | 0.2280\\u00b10.0188 | 0.6014\\u00b10.0100 | 0.2807\\u00b10.0131 | 0.2674\\u00b10.0037 | 0.2502\\u00b10.0040 | 0.5683\\u00b10.0139 | 0.3689 |\\n| Llama-3-8B | Reverse-order | 0.6921\\u00b10.0108 | 0.4035\\u00b10.0049 | 0.3040\\u00b10.0206 | 0.6014\\u00b10.0100 | 0.3720\\u00b10.0141 | 0.5603\\u00b10.0040 | 0.4216\\u00b10.0045 | 0.5975\\u00b10.0138 | 0.4940 |\\n\\nBesides, to evaluate the effectiveness of partial-layer fine-tuning, we freeze the other layers and fine-tune only the last three remaining layers and lm_head using both Alpaca-cleaned and Dolly datasets. 
For SLEB, we use the same parameter settings as for partial-layer fine-tuning to perform LoRA fine-tuning on the Alpaca-cleaned dataset. The experimental results demonstrate that fine-tuning specific layers after pruning with the reverse-order method yields significantly better performance compared to fine-tuning with LoRA using the existing SLEB method.\\n\\n| Model | Method | Dataset | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |\\n|:------------------------:|:------------------------:|:------------------------:|:----------:|:---------:|:----------:|:---------:|:---------:|:---------:|:---------:|:----------:|:-------:|\\n| Llama-3.1-8B-It | SLEB | Alpaca-cleaned| 0.7573\\u00b10.0100 | 0.4973\\u00b10.0050 | 0.2680\\u00b10.0198 | 0.6970\\u00b10.0094 | 0.3865\\u00b10.0142 | 0.4305\\u00b10.0041 | 0.3338\\u00b10.0044 | 0.6385\\u00b10.0135 | 0.5011 |\\n| Llama-3.1-8B-It | Reverse-order | Alpaca-cleaned | 0.7383\\u00b10.0103 | 0.5323\\u00b10.0050 | 0.3080\\u00b10.0207 | 0.7260\\u00b10.0092 | 0.4684\\u00b10.0146 | 0.6567\\u00b10.0038 | 0.5515\\u00b10.0045 | 0.6646\\u00b10.0133 | 0.5807 |\\n| Llama-3.1-8B-It | Reverse-order | Dolly | 0.7709\\u00b10.0098 | 0.5541\\u00b10.0050 | 0.3000\\u00b10.0205 | 0.7424\\u00b10.0090 | 0.4838\\u00b10.0146 | 0.6753\\u00b10.0038 | 0.5522\\u00b10.0045 | 0.7032\\u00b10.0128 | 0.5977 |\\n| Llama-3-8B | SLEB | Alpaca-cleaned | 0.7514\\u00b10.0101 | 0.5026\\u00b10.0050 | 0.2780\\u00b10.0201 | 0.7071\\u00b10.0093 | 0.3720\\u00b10.0141 | 0.3115\\u00b10.0039 | 0.2683\\u00b10.0041 | 0.5967\\u00b10.0138 | 0.3947 |\\n| Llama-3-8B | Reverse-order | Alpaca-cleaned | 0.7388\\u00b10.0102 | 0.5476\\u00b10.0050 | 0.3160\\u00b10.0208 | 0.7218\\u00b10.0092 | 0.4394\\u00b10.0145 | 0.6179\\u00b10.0038 | 0.4497\\u00b10.0045 | 0.6748\\u00b10.0132 | 0.5633 |\\n| Llama-3-8B| Reverse-order | Dolly | 0.7274\\u00b10.0104 | 0.5123\\u00b10.0050 | 0.3040\\u00b10.0206 | 0.6721\\u00b10.0096 | 0.4172\\u00b10.0144 | 
0.6186\\u00b10.0038 | 0.4622\\u00b10.0045 | 0.6811\\u00b10.0131 | 0.5494 |\"}", "{\"title\": \"Response to Reviewer uw4p (3)\", \"comment\": \"We appreciate the reviewer\\u2019s detailed feedback and their acknowledgment of the effectiveness of our findings. Below, we address the primary concerns raised and provide further clarifications.\\n\\n**1. Clear Aim and Motivation**\\n\\nWe respectfully disagree with the characterization that our work is \\\"shooting in the dark and pretending you aimed.\\\" As stated in our abstract, our work seeks to answer an important and practical question: What are the best practices for layer pruning in large language models (LLMs)? While there exist many sophisticated layer pruning strategies, often requiring complex metrics or extra validation data, their true effectiveness remains unclear. Our goal is to explore whether a simple and effective alternative exists. Thus, the aim of this work is not only clear but also highly relevant to the community.\\n\\n**2. Regarding the Two Scenarios of Contribution**\\n\\nWe address the two scenarios identified by the reviewer:\\n\\n* **Compelling Problem or Motivation and Effective Solutions.** The problem is clearly stated in our abstract: What are the best practices for layer pruning in LLMs? This is a compelling problem because, despite the growing importance of model compression, the best practices for layer pruning remain unknown. Our results, acknowledged as effective in the review (e.g., \\\"reverse-order pruning... is effective\\\"), confirm that our work addresses this problem successfully.\\n\\n* **Unusual or Unexpected Observation and In-Depth Analysis.** The effectiveness of reverse-order pruning and partial-layer fine-tuning is indeed \\\"unusual\\\" within the context of existing literature. The common expectation is that more sophisticated strategies would outperform simpler approaches, yet our findings challenge this assumption. 
Specifically:\\n\\n * **Reverse-Order Pruning:** Our analysis reveals that pruning the final layers, which are more task-specific and prone to overfitting, retains the general embedding extraction capabilities crucial for broader generalization. This insight aligns with the understanding of how LLMs manage task-specific versus general representations.\\n\\n * **Partial-Layer Fine-Tuning:** We show that fine-tuning the final layers and the lm_head is particularly effective because these layers are closest to the output and most associated with task-specific features. This approach aligns with widely recognized practices in transfer learning [1-3]. In contrast, methods like LoRA modify parameters indirectly, which may limit their ability to achieve the same level of task-specific optimization.\\nThese explanations, as reiterated in our previous rebuttal, provide a theoretical grounding for our empirical observations.\\n\\n**3. Generalization and Dataset Dependence**\\n\\nRegarding concerns about the generalization of our findings, we note that our fine-tuning experiments used Alpaca and Dolly datasets, both of which are widely adopted and not specifically designed for our test datasets (e.g., PIQA, HellaSwag, OpenbookQA, ARC-e, ARC-c, MMLU, CMMLU, and WinoGrande). The numerical results across these diverse datasets consistently demonstrate strong generalization, addressing concerns about reliance on specific datasets. Besides, we have conducted experiments to evaluate the performance of pruned models **without fine-tuning on any datasets** and directly compared these results with SLEB (see the \\u201c**Comparisons with more recent and advanced layer pruning methods.**\\u201d of **Response to Reviewer uw4p**). \\n\\nWe hope this clarifies our contributions, addresses concerns about the rationale and theoretical insight behind our findings, and demonstrates the novelty and generalizability of our findings. \\n\\n[1] Zhuang, Fuzhen, et al. 
\\\"A comprehensive survey on transfer learning.\\\" Proceedings of the IEEE 109.1 (2020): 43-76.\\n\\n[2] Jang Y, Lee H, Hwang S J, et al. Learning what and where to transfer[C]//International conference on machine learning. PMLR, 2019: 3030-3039.\\n\\n[3] Huh, Minyoung, Pulkit Agrawal, and Alexei A. Efros. \\\"What makes ImageNet good for transfer learning?.\\\" arXiv preprint arXiv:1608.08614 (2016).\"}", "{\"title\": \"Follow up for response (5)\", \"comment\": \"1. I believe the 6.3B model shows some limitations, primarily due to the lack of flexibility in your method. It is beneficial to explore and provide results with a broader range of sparsity ratios to enhance adaptability and performance insights.\\n\\n2.a. My concern about the accuracy obtained without fine-tuning is not about comparing it to other methods but rather about its performance relative to fine-grained pruning methods that do not adopt fine-tuning. Specifically, I noticed that your method results in a significant accuracy drop compared to the dense model, which raises questions about its effectiveness in this context (Llama-3.1-8B-It: 62.99% -> 52.71%; LLaMA3-8B: 60.93% -> 49.4%).\\n\\n2.b. The main point I want to emphasize is that further fine-tuning can diminish the original potential of LLMs, as the quality of the fine-tuning dataset often falls short when compared to the pretraining dataset. \\n\\n2.c. Regarding the generalization of calibration, it is acknowledged that there is some bias toward the dataset. However, based on the ablation study results presented in Table 7 of work [1] and Table A2 of work [2], the generalization remains relatively robust, as evidenced by the minimal variation in accuracy across different datasets. This suggests that the calibration approach retains consistent performance despite the inherent dataset-specific biases.\\n\\n3. I acknowledge that the layer pruning is different from the fine-grained pruning methods. 
However, my main point is, according to the analysis in [1] [3], there are redundant layers in the front layers of LLMs. I think you should give more results about this. For example, for around 20% sparsity in a 7B (8B) model which contains only 32 layers, it will not take many resources to search for the optimal strategy of pruning 6 layers (or 5 layers) in the 32 layers. This kind of ablation study is necessary to show the effectiveness of your conclusion on the reverse-order pruning. \\n\\n4. As for the computational resources, those fine-grained methods [1] [2] [3] [4] [5], which do not adopt fine-tuning or even weight updates, do not require the backward pass, which saves more resources than the methods that require fine-tuning. Even the work [2], which adopts the search for LLMs, can achieve costs comparable to those pruning works [1] [3] [4]. Thus, I do not think the computational resources are a significant overhead for those methods.\\n\\n[1] Fluctuation-based Adaptive Structured Pruning for Large Language Models\\n\\n[2] Search for Efficient Large Language Models\\n\\n[3] LLM-Pruner: On the Structural Pruning of Large Language Models\\n\\n[4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns\\n\\n[5] A Simple and Effective Pruning Approach for Large Language Models\"}" ] }
EjCrfVFZTx
Investigating the Effectiveness of HyperTuning via Gisting
[ "Jason Phang" ]
Gisting (Mu et al., 2023) is a simple method for training models to compress information into fewer token representations using a modified attention mask, and can serve as an economical approach to training Transformer-based hypernetworks. We introduce HyperLlama, a set of Gisting-based hypernetworks built on Llama-2 models that generates task-specific soft prefixes based on few-shot inputs. In experiments across P3, Super-NaturalInstructions and Symbol Tuning datasets, we show that HyperLlama models can effectively compress information from few-shot examples into soft prefixes. However, they still underperform multi-task fine-tuned language models with full attention over few-shot in-context examples. We also show that HyperLlama-generated soft prefixes can serve as better initializations for further prefix tuning. Overall, Gisting-based hypernetworks are economical and easy to implement, but have mixed empirical performance.
[ "hypernetworks", "llm", "parameter-efficient fine-tuning", "prefix tuning" ]
Reject
https://openreview.net/pdf?id=EjCrfVFZTx
https://openreview.net/forum?id=EjCrfVFZTx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kkgrD6cvWg", "b78Ri3LLn9", "Y38KAHSeyO", "RKtwoOYw9t", "EZtKn5c38k", "90GZkhFr4p", "3ROUk8Jb2b" ], "note_type": [ "decision", "official_review", "meta_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737523452953, 1731083057626, 1734577640462, 1730600095553, 1730542607813, 1729775052263, 1729580625139 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1440/Reviewer_5q58" ], [ "ICLR.cc/2025/Conference/Submission1440/Area_Chair_YUkh" ], [ "ICLR.cc/2025/Conference/Submission1440/Reviewer_TYJ2" ], [ "ICLR.cc/2025/Conference/Submission1440/Reviewer_C8AB" ], [ "ICLR.cc/2025/Conference/Submission1440/Reviewer_5q1N" ], [ "ICLR.cc/2025/Conference/Submission1440/Reviewer_76re" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduce a set of Gisting based hyper network called HyperLlama for generating soft prefix tokens for downstream tasks. The prefix tokens acts similar to few shot example in in-context learning. The experiments show that their HyperLlama is effective in generating soft prefix tokens, but they underperformed compared to multi-task fine-tuned models with attention to in-context examples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The paper is easy to read.\\n2. The paper has a very detailed discussion on each experiment.\", \"weaknesses\": \"It seems that the paper has no technical contribution to the community. The introduced HyperLlama follows the setting of HyperTuning.\\nThe paper introduced a set of new models (HyperLlama) for sure, but I do not think it is a valid technical contribution. 
Further, the introduced models generally underperform few-shot language models by large margins.\", \"questions\": \"In the second paragraph of 3.2, the authors say they use QLoRA for hyper network training, but in the last paragraph of 3.3, the authors state they fine-tuned all model parameters and no LoRA is used. I am a bit confused.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper received ratings of 5, 6, 5, 3, 3, where the reviewers assigned mixed-to-low ratings, primarily citing concerns over limited novelty, weak empirical results, and lack of clarity regarding the practical benefits of the proposed approach.\\n\\nThe submission introduces HyperLlama, a Gisting-based hypernetwork that leverages modified attention mechanisms to generate task-specific soft prefixes for large language models (LLMs).\", \"strengths\": [\"The idea of using Gisting-based hypernetworks for economical computation is conceptually interesting.\", \"The proposed methodology has the potential to reduce computational costs during inference.\"], \"area_for_improvement\": [\"Experimental results are not convincing: HyperLlama fails to achieve competitive performance in critical benchmarks compared to fine-tuned and in-context learning models.\", \"Unclear practical relevance: the benefits of using HyperLlama in realistic scenarios remain unclear due to its limited effectiveness.\", \"Presentation Issues: The manuscript lacks clarity in explaining certain methodological choices, especially around compression hyperpretraining and task-specific adaptation strategies.\", \"While the proposed approach demonstrates some conceptual promise, its empirical weaknesses and unclear practical relevance outweigh its strengths. 
The authors are encouraged to further refine their methodology, improve its performance, and provide more compelling evidence of its utility in future work.\"], \"additional_comments_on_reviewer_discussion\": \"The discussion between authors and reviewers was active, with the authors addressing key points raised in the initial reviews. Reviewers acknowledged the effort made to clarify and contextualize the work.\\n\\nThe authors made a commendable effort to address reviewers' concerns, providing clarifications on the use of hypernetworks, evaluation metrics, and the limitations of their approach. Their willingness to engage with feedback reflects a positive and collaborative attitude.\\nWhile some responses offered valuable context, important explanations seem not sufficiently detailed. Additional empirical evidence or comparisons would have helped substantiate their claims. \\n\\nThe rebuttal and subsequent discussion did not result in a significant shift in the reviewers\\u2019 perspectives. Concerns about the novelty, practical contributions, and consistent underperformance of the approach persisted and remain unresolved. For instance, the reviewers highlighted the limited novelty of the proposed approach and its underperformance compared to baseline models with direct access to few-shot examples. 
The authors are encouraged to address these issues/concerns and consider a resubmission.\"}", "{\"summary\": [\"This study introduces HyperLlama, a Gisting-based hypernetwork designed to generate soft prefixes for a frozen downstream Llama-2 model.\", \"Through experiments on P3, S-NI, and Symbol Tuning datasets, they demonstrate that HyperLlama can compress few-shot example information into soft prefixes,\", \"HyperLlama-generated soft prefixes also serve as strong initializations for further prefix tuning, supporting efficient fine-tuning.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is written in a way that makes the field and methodology easy to understand.\", \"weaknesses\": \"The Discussion section describes the strengths and weaknesses of HyperLlama well, but these aspects are not fully addressed in the paper. For example, while it is mentioned that HyperLlama saves resources during inference, the authors should provide numerical evidence to support this.\\n\\nI believe that Gisting (Mu et al., 2023) and HyperTuning (Phang et al., 2023) have both made substantial contributions to this field, and I have some concerns that this paper primarily builds on these approaches by combining them in its experiments.\", \"questions\": \"(1) What are the advantages of this paper's approach compared to existing Gist methods? Additionally, I would like to understand the benefits of separating the model components.\\n(2) As highlighted in the title, it would be helpful if the paper demonstrated the effectiveness of the method with numerical evidence compared to existing methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce HyperLlama, a Gisting-based hypernetwork designed to generate soft prefixes for a frozen downstream Llama-2 model. 
Their experiments across diverse datasets demonstrate that HyperLlama effectively compresses information from few-shot examples into soft prefixes, outperforming baselines that lack access to additional examples. Additionally, they show that HyperLlama-generated soft prefixes provide superior initializations for further prefix tuning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of HyperLlama leverages Gisting to generate soft prefixes, compressing task-specific information efficiently.\\n\\n2. Comprehensive experiments across multiple datasets demonstrate HyperLlama's strengths, especially in initializing soft prefixes for prefix tuning, showing improved performance over random initializations.\", \"weaknesses\": \"1. The paper lacks clarity in several areas, including the introduction of motivation, the description of the methodology, and the presentation of experimental results and figures.\\n\\n2. The experiments are limited to the Llama-2-7B model, and there are no evaluations on newer models (such as Llama-3) or larger-scale models (like 13B), which limits the generalizability of the findings.\\n\\n3. HyperLlama struggles with tasks that rely on precise output formats or highly contextual few-shot examples, affecting its generalizability.\\n\\n4. The method\\u2019s effectiveness is contingent on Gisting\\u2019s ability to compress information accurately, which may vary across different types of tasks.\", \"questions\": \"Just like the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduced a Gisting-based hypernetworks. This hypernetwork is built on Llama-2 and is able to generate task-specific soft prefixes based on few-shot inputs. To do so, the authors use two LLMs. One acts as the hypernetwork and the other one acts as the downstream network. 
To enable training, the hypernetwork is trained with QLoRA (plus additional embeddings) to produce Gist tokens to append to the downstream network.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1, The figures are good and especially helpful for understanding the main idea of this paper.\\n\\n2, This paper has detailed training details.\", \"weaknesses\": \"1, Table 1 has not been cited in the paper. What is its role in this paper?\\n\\n2, At training, the hypernetwork and the downstream network are quantized to 4-bit. Are they full-precision when used in the test stage? If it is, how to solve the gap between the quantized model and the full-precision model?\\n\\n3, To generate Gist tokens, two forwards with LLM are needed, which may incur high costs.\\n\\n4, Even though I am not an expert in the area, I still recommend the author reorganize this paper for clear expression.\", \"questions\": \"Please answer the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a novel HyperLlama, which uses a hypernetwork to generate gist tokens to capture information from few-shot examples. The author provides sufficient experimental evidence that HyperLlama performs well on P3 and S-IN datasets, but performs not that good on Symbolic-tuning task. However, this work needs to strengthen its analysis of HyperLlama's performance deficiencies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes a novel method to generate gist tokens using hypernetwork, which can compress few shot examples of different tasks into gist tokens.\\n2. 
The author's motivation for designing this hypernetwork is very clear, and sufficient experiments are provided to prove the good performance of gist tokens on P3 and S-IN datasets, and their insufficient performance on the Symbolic-tuning task. \\n3. The author has demonstrated through sufficient experiments that using the soft token generated by the hypernetwork as the initial token of prefix finetuning can improve the efficiency of prefix finetuning.\", \"weaknesses\": \"1. As noted in Section 3.3, \\\"Compression Hyperpretraining\\\" is essential for the downstream model (specifically the Llama 2 model in this study) to support gist tokens. However, it remains uncertain whether this additional pretraining step (\\\"Compression Hyperpretraining\\\"), following the standard pretraining, might adversely affect the model's performance on other tasks.\\n2. As discussed in Section 5, HyperLlama does not exhibit superior performance on the Symbolic-tuning task. The paper lacks an in-depth analysis of the factors contributing to this shortcoming. Specifically, it is not clear whether the hypernetwork is unable to extract all relevant information from the few-shot examples, or if the gist tokens are insufficient in storing the information from the few-shot examples.\\n3. The study lacks an ablation analysis regarding the number of few-shot examples and the number of gist tokens. In other words, it is not explored whether increasing the number of few-shot examples, while keeping the number of gist tokens constant, would lead to an improvement in performance.\", \"questions\": \"Please refer to the Weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EispKqtw5B
Stochastic Layer-Wise Shuffle: A Good Practice to Improve Vision Mamba Training
[ "Zizheng Huang", "Haoxing Chen", "Jiaqi Li", "jun lan", "Huijia Zhu", "Weiqiang Wang", "Limin Wang" ]
Recent Vision Mamba models not only have much lower complexity for processing higher resolution images and longer videos but also performance competitive with Vision Transformers (ViTs). However, they suffer from overfitting and have thus only been presented up to base size (about 80M). It is still unclear how vanilla Vision Mamba (Vim) can be efficiently scaled up to larger sizes, which is essential for further exploitation. In this paper, we propose a stochastic layer-wise shuffle regularization, which empowers successfully scaling non-hierarchical Vision Mamba to a large size (about 300M) in a supervised setting. Specifically, our base and large-scale ShuffleMamba models can outperform the supervised ViTs of similar size by 0.8% and 1.0% classification accuracy on ImageNet1k, respectively, without auxiliary data. When evaluated on the ADE20K semantic segmentation and COCO detection tasks, our ShuffleMamba models also show significant improvements. Without bells and whistles, the stochastic layer-wise shuffle has the following highlights: (1) Plug and play: it does not change model architectures and is omitted in inference. (2) Simple but effective: it can alleviate the overfitting in Vim training and only introduces random token permutation operations. (3) Intuitive: the token sequences in deeper layers are more likely to be shuffled as they are expected to be more semantic and less sensitive to patch positions.
[ "Vision Mamba", "Supervised Learning", "Training Regularization", "Computer Vision" ]
https://openreview.net/pdf?id=EispKqtw5B
https://openreview.net/forum?id=EispKqtw5B
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zd4zWi9aP1", "U5fdqmA93P", "Jz70Y9o6ex", "GzDtaKVut6", "BwY37ltLWo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730790676455, 1730790365648, 1731457618781, 1730785679607, 1730499413238 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3011/Reviewer_ay2Y" ], [ "ICLR.cc/2025/Conference/Submission3011/Reviewer_G4qT" ], [ "ICLR.cc/2025/Conference/Submission3011/Authors" ], [ "ICLR.cc/2025/Conference/Submission3011/Reviewer_3nFh" ], [ "ICLR.cc/2025/Conference/Submission3011/Reviewer_YKke" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a stochastic layer-wise shuffle regularization method that successfully scales non-hierarchical visual mambas to large sizes in a supervised setting. It is experimentally verified that the proposed large-scale ShuffleMamba model outperforms a similar-sized supervised vit by 0.8% and 1.0% in classification accuracy on ImageNet1k without auxiliary data, and outperforms a similar-sized ViT on detection and segmentation tasks, respectively. Overall, this paper mitigates the overfitting problem of the large-scale vanilla vision mamba.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1,Clear motivation. Based on a token shuffling process to enhance the positional transformation invariance and a layer-dependent probability assignment according to the layer perception assumption.\\n\\n2,Good performance and efficiency. Being a plug-and-play algorithm, the approach neither incurs heavy training costs nor changes the architecture of the visual mamba. Moreover, it outperforms existing visual mamba models on classification, detection, and segmentation tasks, and outperforms similarly sized ViTs.\\n\\n3, well-written and clear presentation.\", \"weaknesses\": \"1, Inconsistent results. 
The number of parameters of ShuffleMamba-S in Table 1 is 7M, while it is 26M in Table 2.\\n\\n2, Selective comparison. The convolution and transformer based methods that the authors compare are all before 2022, so the author's claim of superiority over a ViT model of the same size seems unreasonable to me. It is recommended to add recent methods such as [1-2] for comparison. However, from my observation, it seems that shuffle-mamba does not achieve better performance compared to the convolution-based model [1] and the transformer-based model [2] at the same model size scale. \\nMoreover, shuffle-mamba of similar size achieves significantly lower results compared to the recent VMamba (81.2 vs. 82.5).\\n\\n3, Why can the Stochastic Layer-Wise Shuffle training regularization algorithm only be applied to non-hierarchical vision mamba? Purely from the algorithmic view, this is actually a disruption of token positions to reduce model overfitting. Therefore, this algorithm can also be used for hierarchical vision mamba such as VMamba and MSVMamba.\\n\\n4, Are different shuffle methods explorable? For example, shuffling on a localized scale versus shuffling the whole image?\\n\\n5, I did not find details of the shuffle-mamba design. The author's description of the structure of the whole model is rather vague. 
I think it is necessary for the authors to add that part of the description, especially since I observed that the authors actually have blank space to support the operation.\\n\\n\\n[1] InceptionNeXt: When Inception Meets ConvNeXt (CVPR2024)\\n[2] TransNeXt: Robust Foveal Visual Perception for Vision Transformers (CVPR2024)\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper argues that vanilla Vision Mamba models face an overfitting problem when scaling up, and proposes to overcome it by performing a layer-wise random token shuffle for regularization during training. The proposed ShuffleMambas are based on vanilla Vision Mamba and Mamba-reg in a plug-and-play manner. The token shuffling is performed with a layer-dependent probability.\\n\\nShuffleMamba shows higher training loss and lower evaluation loss with trivial degradation in terms of training throughput. Experiments show improved classification, semantic segmentation and object detection/instance segmentation results compared with baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-organized and easy to follow.\\n\\n2. Experiments on different architectures, model sizes and downstream tasks provide a comprehensive evaluation of the proposed Stochastic Layer-Wise Shuffle.\", \"weaknesses\": \"1. Minor improvement compared to baselines: from the main experimental results in Table 2, Mamba-Reg-B(99M) - 83.0% v.s. ShuffleMamba-Reg-B(98M) - 83.1%, Mamba-Reg-L(341M) - 83.6% v.s. ShuffleMamba-Reg-L2(341M) - 83.6%, showing minor or no performance improvement when equipped with the proposed SLWS.\\n\\n2. As ShuffleMamba-S/M/B/L1 are based on vanilla Vision Mamba (Vim), why was the Vim-M/B/L1 performance not reported in Table 2?
Considering the training and evaluation losses of Vim-M have already been reported in Figure 2, the missing experimental comparisons reduce the confidence in the proposed SLWS, especially when SLWS is claimed to relieve the overfitting issue of Vim models in this paper. I suggest the authors also include Vim-M/B/L1 as baselines to provide a more direct comparison if possible. \\n\\n3. While the standard training and testing image resolutions are both 224x224 in this paper, the motivation for an additional 256\\u00d7256 testing resolution remains unclear. There is little discussion of the generalization ability to different image sizes and why ShuffleMamba has the potential to outperform on larger testing resolutions. Can the authors provide further clarification on the 256-resolution testing for better understanding of the proposed method?\", \"questions\": \"1. As ShuffleMamba-S/M/B/L1 are based on vanilla Vision Mamba (Vim), why was the Vim-M/B/L1 performance not reported in Table 2 to provide a more direct comparison? Please refer to Weaknesses 2.\\n\\n2. What is the motivation for an additional 256x256 test-time resolution? Please refer to Weaknesses 3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper tries to address the scalability limitations of recent Vision Mamba (Vim) models, which, despite their lower computational complexity and competitive performance relative to Vision Transformers (ViTs), are constrained to approximately 80 million parameters due to overfitting issues. To overcome this, the authors propose a novel stochastic layerwise shuffle regularization technique that enables the successful scaling of non-hierarchical Vision Mamba models up to around 300 million parameters in supervised settings. 
The resulting ShuffleMamba models demonstrate superior performance, achieving 0.8% and 1.0% higher classification accuracy on ImageNet1k compared to similarly sized supervised ViTs without relying on auxiliary data. Additionally, these models exhibit significant enhancements in ADE20K semantic segmentation and COCO detection tasks. Notably, the proposed regularization method is plug-and-play, does not modify the existing model architecture, and is excluded during inference. Its simplicity lies in introducing random token permutations to mitigate overfitting, with an intuitive approach where deeper layer feature tokens are shuffled more frequently due to their increased semantic robustness and reduced sensitivity to patch positions. This work presents a compelling advancement in scaling Vision Mamba models, potentially broadening their applicability in various high-resolution and long-duration vision tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The layer-wise shuffle is simple and brings no extra cost during inference.\", \"weaknesses\": \"1. Poor performance. Compared with Mamba-Reg, ShuffleMamba achieves worse performance. Besides, when working together with Mamba-Reg, Mamba-Reg-B makes no improvements over Mamba-Reg.\\n2. This paper claims that \\\" It is still unclear how vanilla Vision Mamba (Vim) can be efficiently scaled up to larger sizes\\\". However, some works [1, 2] have already scaled Vim to large and even huge sizes.\\n3. Lack of speed comparison. Please report the inference time.\\n\\n[1] Wang, F., Wang, J., Ren, S., Wei, G., Mei, J., Shao, W., ... & Xie, C. (2024). Mamba-r: Vision mamba also needs registers. arXiv preprint arXiv:2405.14858.\\n[2] Ren, S., Li, X., Tu, H., Wang, F., Shu, F., Zhang, L., ... & Xie, C. (2024). Autoregressive Pretraining with Mamba in Vision. 
arXiv preprint arXiv:2406.07537.\", \"questions\": \"Can you report the inference time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a regularization method called Stochastic Layer-Wise Shuffle (SLWS) to enhance the performance of Vision Mamba (Vim). The Vim model, when integrated with SLWS, is referred to as ShuffleMamba. The authors highlight a limitation of current plain Vim architectures in modeling local neighborhood relationships due to their corner-to-corner scanning pattern within the state space model. They propose that applying SLWS to shuffle token positions in deeper layers can improve large-scale vanilla Vim training, helping to mitigate overfitting. Experiments are conducted on ImageNet-1K classification, MS COCO detection, and ADE20K semantic segmentation tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper is well-organised.\", \"Motivation is clear and strong.\", \"extensive experiments are conducted on downstream vision tasks.\"], \"weaknesses\": [\"The SLWS approach disrupts the inherent locality of image data. Stochastic shuffling of input token order risks causing significant performance degradation, particularly in dense prediction tasks where positional information is crucial. Even for classification tasks, deeper layers rely on coarse positional cues to distinguish object shapes, as evidenced in feature maps.\", \"Table 5 in [2] shows that a random scanning order results in lower performance compared to other scanning patterns, which contradicts the claimed effectiveness of SLWS.\", \"Figure 2(a) is confusing due to the dual-axis design (one axis on each side). 
Additionally, training errors are consistently much greater than evaluation errors along the x-axis, which appears unreasonable.\", \"The paper does not compare results with recent work, specifically ARM [2], which also addresses scaling ViM. At comparable model complexities, ARM-B (83.2%) outperforms ShuffleMamba-B (82.6%) on ImageNet-1K, and ARM-B (84.5%) surpasses ShuffleMamba-Reg-L2 (83.6%). Additionally, when scaled further, ARM-H with 662M parameters reaches 85.5% accuracy, whereas ShuffleMamba\\u2019s scaling potential appears to plateau at 341M parameters.\", \"[2] Ren, Sucheng, et al. \\\"Autoregressive Pretraining with Mamba in Vision.\\\" arXiv preprint arXiv:2406.07537 (2024).\"], \"questions\": \"Please refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
EiYr9ArUFl
Gathering and Exploiting Higher-Order Information when Training Large Structured Models
[ "Pierre Wolinski" ]
When training large models, such as neural networks, the full derivatives of order 2 and beyond are usually inaccessible, due to their computational cost. This is why, among the second-order optimization methods, it is very common to bypass the computation of the Hessian by using first-order information, such as the gradient of the parameters (e.g., quasi-Newton methods) or the activations (e.g., K-FAC). In this paper, we focus on the exact and explicit computation of projections of the Hessian and higher-order derivatives on well-chosen subspaces, which are relevant for optimization. Namely, for a given partition of the set of parameters, it is possible to compute tensors which can be seen as "higher-order derivatives according to the partition", at a reasonable cost as long as the number of subsets of the partition remains small. Then, we propose an optimization method exploiting these tensors at order 2 and 3 with several interesting properties, including: it outputs a learning rate per subset of parameters, which can be used for hyperparameter tuning; it takes into account long-range interactions between the layers of the trained neural network, which is usually not the case in similar methods (e.g., K-FAC); the trajectory of the optimization is invariant under affine layer-wise reparameterization.
[ "neural networks", "Hessian", "learning rate", "projections", "optimization" ]
Reject
https://openreview.net/pdf?id=EiYr9ArUFl
https://openreview.net/forum?id=EiYr9ArUFl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s9NbGBzPIr", "oJFWrJYyAK", "jYhqYW86Im", "gXRSRw1B9k", "fkQojw6tK0", "dbKf9mAShy", "al4JQqm6Dv", "Yb1HLjFOtW", "TRWNevrW99", "LCqZbBseaw", "8l0FL14s3J", "6V0eQNbjxU", "5nkebxKQoC", "3h1MQ52qSz", "29Uc3jn1Q5" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1730636988733, 1732289584516, 1732207514916, 1733219966989, 1730778235669, 1732711384544, 1732710839502, 1733091169572, 1732653396982, 1734432032376, 1731096222792, 1732793715123, 1732288945727, 1732288705814, 1737523555478 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3114/Reviewer_PhVC" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Reviewer_Hw9J" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Reviewer_PhVC" ], [ "ICLR.cc/2025/Conference/Submission3114/Reviewer_Hw9J" ], [ "ICLR.cc/2025/Conference/Submission3114/Area_Chair_wTAM" ], [ "ICLR.cc/2025/Conference/Submission3114/Reviewer_6Jm1" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Submission3114/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The authors suggest a layer-wise partitioning of order $d$ derivatives, that transforms the $R^{p^d}$ tensor to a $R^{S^d}$ tensor which is computationally tractable even in deep networks. 
The authors then leverage this partitioning to first compute empirically the Hessian for some deep neural networks, and subsequently suggest a second order method based on partitioning for optimization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper has several strengths:\\n\\n1) The interest in efficient computational schemes of higher order information for deep neural networks is significant, and improved methods of estimating the Hessian, as well as higher order terms could help improve interpretability and shed light on what neural networks learn during optimization.\\n2) The partitioning scheme is the main contribution of this paper, and seems to be novel as far as I know, with possible applications to many interesting avenues.\", \"weaknesses\": \"In spite of the strengths, the paper has some clear drawbacks in my opinion:\\n\\n1) The paper is not written well enough, with a substantial lack of a literature survey on properties of Hessians in deep networks, as well as works on second order methods from recent years. In terms of the writing itself, all of the equations on page 5 are unnumbered making them hard to refer to, and the chosen notation for tensor contraction $A[ u, u... u]$ is not standard and never explained. It must be understood from context in the main text or the appendices.\\n2) The main contribution - the fast computation of lower complexity tensor containing similar information as the original tensor is not given in detail in the main text, and even in the appendix should be explained explicitly.\\n3) In general, I believe the focus of the paper is completely misguided. It seems clear that the suggestion of the second order optimization method is not superior (at least in its current form) to other simple gradient based methods, and should then not be the focus of the paper. 
Instead, if the paper focused on the partitioning/computation methods and then applied this to real-world or even particular solvable examples, extending Sec. 5.1, the paper would be much stronger.\", \"questions\": \"1) L43 - I'm not sure this is correct, assuming unlimited compute, and a perfectly known Hessian, even if the Hessian is singular you can still invert it on the nonsingular subspace, using SVD and find the pseudo-inverse.\\n2) L71 - Why do the authors need to regularize instead of computing just the pseudo inverse? is it inefficient? this should be stated explicitly\\n3) L76 - \\\"so its preserves \\\"\\n4) L218-220 seems wrong, the d-derivative is a map from $\\\\mathbb{R}$ to $\\\\mathbb{R}^{p^d}$ and not the other way around, even if the intention was a map from weight space to the operator output it should be from $\\\\mathbb{R}^p$ to $\\\\mathbb{R}^{p^d}$, so this is unclear to me. Additionally, the second term is unclear, $u$ are not defined at this point and the brackets $[.]$ are undefined as well. In the appendix it seems clear that the intention is tensor contraction between the previous term and the brackets but this is not standard.\\n5) L258 - \\\"Therefore, the tensors $D_\\\\theta^d(u)$ extract more information than the naive Taylor terms, while keeping a reasonable computational cost. \\\" why is this statement obvious \\\"therefore\\\"? is the statement that regular Taylor terms lose the layer-wise structure of the network? I understand that the equality between Taylor and this decomposition is only obtained after tracing out the $s_i$ indices, but what is the intuition? it would be useful to show explicitly for a low $d$ derivative with a fixed number of parameters to illustrate the difference.\\n6) the authors don't comment on the compute time of their method compared to single gradient based method (Which seem to be better so far), it would seem like $t*p*S$ vs $t*p$, making it substantially slower for deep networks. 
\\n7) While the method provided in this text is not shown to be superior to standard algorithms, it might be interesting to consider the computation method in the context of sharpness aware minimization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Once again, we thank all reviewers for their comments, suggestions, and questions. We are pleased that all reviewers showed interest, if not enthusiasm, for our work and made relevant suggestions for improvement.\\n\\nSeveral sections have been added and placed at the end of the appendix to maintain the current section numbering, but we plan to reorder these in future revisions of the paper.\\n\\n# Quality of writing\\n\\nWe take the comments of Reviewers Hw9J and PhVC about the quality of writing seriously. Therefore, we have performed many corrections in the main text (highlighted in blue). We hope that the writing quality of the paper is now sufficient.\\n\\n# Comparison of training times\\n\\nIt is true that a comparison of training times is missing, as pointed out by Reviewers 6Jm1 and PhVC. Without such a comparison, one could legitimately think that our method is very expensive.\\n\\nTo fix this, we have added Appendix M and a table comparing the training times: the training with our method is not excessively longer than with K-FAC, and is of the same order as Adam/K-FAC on small networks.\\n\\n# Appendices J and K\\n\\nIf possible, we encourage the reviewers to take a look at Appendices J and K. Appendix K contains a study showing the importance of keeping off-diagonal coefficients of $\\\\bar{\\\\mathbf{H}}$, which illustrates the value of our method. 
Appendix J shows the influence of the choice of the partition of the parameter space on the training time and the final loss.\\n\\n# Notations in Section 3\\n\\nReviewer PhVC pointed out that the explanation of the notations used in Section 3 was not sufficient. We agree with this view, and we have worked hard to make this as clear as possible in the new versions of the paper:\\n 1. several paragraphs of Section 3 have been rewritten;\\n 2. Appendix A has been improved;\\n 3. some notations have been improved;\\n 4. Appendix L has been added to explain thoroughly our notation.\\n \\nHowever, due to space limitations, we have not included these explanations in the main text. Otherwise, we would have to remove essential parts of the paper. \\n\\n# Interpretations of the non-regularized optimization method\\n\\nFollowing the suggestion of Reviewer Hw9J, we have added two paragraphs in Appendix B showing how to derive our method in a simpler way.\"}", "{\"comment\": \"We thank Reviewer PhVC for the review and the insightful remarks and suggestions. We also appreciate that the reviewer acknowledges our motivation and the relevance of our method.\\n\\n# Quality of writing\\n\\nWe have made many corrections (highlighted in blue). We hope that the writing quality of the paper is now sufficient.\\n\\nThe equations have been numbered.\\n\\n# Higher-order derivatives of a multivariate function\\n\\nThe reviewer raises an important clarity issue, since it is crucial to understand the notation used in Section 3 to understand the rest of the paper.\\n\\nIn short, we have dropped the \\\"linear form\\\" formalism and adopted fully the \\\"tensor\\\" formalism for the derivatives. For tensor contraction, to our knowledge, Einstein notation is the standard, but it is cumbersome, and we feel that it is too complicated for our use. 
So, we used a notation similar to [6, 7], based on [5] (see Chapter VIII.12), and detailed for instance in http://virtualmath1.stanford.edu/~conrad/diffgeomPage/handouts/taylor.pdf .\\n\\nAnyway, we have improved Section 3 and Appendix A, and we have added Appendix L to make our notation as clear as possible.\\n\\n# Use of pseudo-inverse in optimization\\n\\nWe thank the reviewer for the suggestion. However, the pseudo-inverse has many drawbacks, which are not related to the computational cost. Notably, **after checking well-known reference books in optimization [1, 2, 3], computing the pseudo-inverse of the Hessian (or any other matrix) when it is singular is extremely uncommon.** We explain why in the following.\\n\\nDuring the practical optimization process (e.g. when using Newton's method), the matrices we need to invert are in fact invertible: it is very rare for an eigenvalue of the Hessian to be numerically zero. So, there is no need to use the pseudo-inverse: when a matrix is invertible, its inverse is equal to its pseudo-inverse.\\n\\nHowever, it is hard to deal with close-to-zero eigenvalues: it is technically possible to invert a matrix with close-to-zero eigenvalues, but it would result in a matrix with very large eigenvalues (leading to exploding numerical values, instabilities of the optimization process, etc.). The pseudo-inverse would be useless to solve this problem, since it is equal to the inverse (whether the eigenvalues are close to zero or not). Therefore, other techniques have been developed to overcome this difficulty (see [1, 2, 3]). A common one is regularization: instead of inverting $\\\\mathbf{A}$, we invert $\\\\mathbf{A}+\\\\epsilon I_P$ (where $I_P$ is the identity matrix).\\n\\nWe have built our own regularization on top of the one presented in [4], for the following reasons:\\n 1. [4] is theoretically well-founded;\\n 2. 
optimization of neural networks is nonconvex, and [4] provides results that are valid in nonconvex optimization (up to some conditions);\\n 3. [4] uses the order-3 derivative of the loss, to which we have access (thanks to Section 3).\\n\\nThe idea behind [4] (and our regularization) is that it is worth trusting the order-2 Taylor approximation of the loss as long as the cubic term of the Taylor approximation is not too large. Otherwise, it would be better to take smaller training steps.\\n\\nThe reviewer may check the theoretical results in [4]. If the reviewer insists, we can explain why we do not use the pseudo-inverse in appendix. But, since it is (almost) never used in optimization, we believe that it is unnecessary.\\n\\n# References\\n\\n[1] *Linear and Nonlinear Programming*, Luenberger, 2008.\\n\\n[2] *Lectures on Convex Optimization*, Nesterov, 2018.\\n\\n[3] *Numerical Optimization*, Nocedal and Wright, 1999.\\n\\n[4] *Cubic regularization of Newton method and its global performance*, Nesterov and Polyak, 2006.\\n\\n[5] *Foundations of Modern Analysis*, Dieudonne, 1960.\\n\\n[6] *Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints*, Cartis et al., 2020.\\n\\n[7] *Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models*, Birgin et al., 2017.\"}", "{\"comment\": \"We thank again the reviewers for their time and the suggestions that helped us to improve our paper.\\n\\nAlongside the main strengths of paper, which are acknowledged by all the reviewers (well-principled way of computing per-layer learning rates, measuring interactions between layers, efficient computation of quantities related to higher-order derivatives...), we are aware that the empirical evaluation of our **optimization method** is limited. 
However, the provided empirical evaluation goes beyond synthetic datasets and small neural networks (we even show how to use our method with a 100-layers MLP), which we call a \\\"proof-of-concept\\\". \\n\\nOverall, we think that the **methodological tools** developed in our paper are of interest to the community (as acknowledged by the reviewers), and we have built upon them an optimization method that is applicable in several (non-trivial) settings. This optimization method, although not thoroughly evaluated, helps to show how the tools we developed can be used, and provides practical insights for future research.\"}", "{\"summary\": \"The paper proposes to make second-order optimization computationally tractable by taking a coarse view of high-dimensional parameter spaces. The authors break the Hessian of a deep learning model into blocks, with one block for every pair of layers, and compute one summary scalar per block. This allows for modeling inter-layer interactions, unlike many other approaches to approximate second-order optimization that neglect these terms. The authors propose a cubic-regularized version of their algorithm, and present experimental evidence that inter-layer interactions in deep learning models are non-trivial. Ultimately, the experimental results of the authors' proposed method are mixed.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors present a very interesting idea, and thoroughly motivate the idea. It really is a big short-coming that many related papers on this topic neglect inter-layer interactions when designing optimization algorithms. 
And finding computationally tractable ways to \\\"summarize\\\" the Hessian is a very strong idea.\\n\\nThe discussion of related work is quite comprehensive and clear.\", \"generally_the_work_feels_quite_thoughtful\": \"considering what are the issues with Newton's method, and how to try to overcome them.\", \"weaknesses\": \"I think the method could potentially be introduced in a more straightforward way, and I want to suggest one way. I think the method could be viewed as a change of variables to a smaller set of local optimization variables. In particular, instead of viewing the loss as a function of general perturbations to all the weight tensors:\\n\\nloss( W_1 + \\u2206W_1, W_2 + \\u2206W_2, ..., W_S + \\u2206W_S)\", \"we_can_view_the_loss_as_as_a_function_of_scalar_parameterized_perturbations_to_each_layer\": \"loss( W_1 - \\u03b7_1 * G_1 , W_2 - \\u03b7_2 * G_2, ..., W_S - \\u03b7_S * G_S)\\t\\t\\t(*)\\n\\nwhere \\u03b7_1, \\u03b7_2, ..., \\u03b7_S are a collection of scalars and G_1, G_2, ..., G_S are the gradients of the loss with respect to each weight tensor, evaluated at the point W_1, W_2, ..., W_S. We can consider (*) to be a loss function with S variables \\u03b7_1, \\u03b7_2, ..., \\u03b7_S. It's then clear that Hessian of this \\\"reduced-dimensionality\\\" loss is an S x S matrix. And we can throw any of a wide range of optimization methods toward solving this local S-dimensional optimization problem.\\n\\nI also think the title could possibly be improved. What about something like \\\"Multi-Tensor Optimization via Second-Order Scalar Summaries\\\"?\\n\\nAlgorithmically, there is a weakness in the method that it only searches in the gradient direction for each tensor. This means the method would miss Shampoo-style [1] changes to the gradient direction which have been found to speed up deep learning training empirically.\\n\\nThe experimental results are mixed and perhaps not terribly promising at the moment. 
But it's good that you are open and up front about this. I think it's unlikely the broader community would focus on the paper too closely without more thorough experimental results.\\n\\nSome light proof-reading of the writing would be helpful in places. For example line 90--91 \\\"This is typically what is done by Dangel (2023), despite it does not go beyond the second-order derivative.\\\"\\n\\n[1] https://arxiv.org/abs/1802.09568\", \"questions\": \"See weaknesses section. Do the authors agree with this perspective?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 6Jm1,\\n\\nWe would be delighted to know if you are satisfied with our answer and the changes made to our paper, including:\\n * testing $\\\\bar{\\\\mathbf{H}}$ against $\\\\mathrm{diag}(\\\\bar{\\\\mathbf{H}})$ with our method (see Appendix K); that is, testing the usefulness of off-diagonal coefficients of $\\\\bar{\\\\mathbf{H}}$, that we are able to calculate thanks to our method;\\n * our answer about the relevance of the test accuracy in our setup.\"}", "{\"comment\": [\"Dear Reviewer PhVC,\", \"We would be delighted to know if you are satisfied with the changes made to our paper, including:\", \"the improvement of the quality of the writing (in the entire paper);\", \"the explanation of the notation used for the order-$d$ derivatives and tensor contraction (Section 3 and Appendix A);\", \"the answer given in our rebuttal about the pseudo-inverse.\"]}", "{\"title\": \"Reply to the authors\", \"comment\": \"Thank you for your effort and corrections made to the manuscript, I apologize for the delayed response.\\nI believe that the writing improvements help clarify the results, and merit raising my rating to 5.\\n\\nLet me point out first of all, that my question regarding the pseudo-inverse was not the reason for my initial rating, and perhaps I did not phrase it correctly - my 
intention was that using SVD, even under poor conditioning, it's possible to remove the nearly singular eigenvalues (even numerically by simply setting them to 0 in the inverse matrix) and perform the search on the subspace spanned by the non-flat (nearly flat) directions, see refs.[1-8].\\n\\nSecondly, the reason my score is not higher is simply that the proposed algorithm, which is a large portion (if not the main portion) of this work does not show any improvement over standard algorithms, and along with the lack of an expected convergence rate, there is no reason to believe it is particularly useful. Therefore, my main issue is that the focus of the work in its current form is not of sufficient interest to the community to merit publication in ICLR.\", \"references\": \"[1] Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins\\nUniversity Press), \\u00a78.3 and Chapter 12.\\n\\n[2] Lawson, C.L., and Hanson, R. 1974, Solving Least Squares Problems (Englewood Cliffs, NJ:\\nPrentice-Hall), Chapter 18.\\n\\n[3] Forsythe, G.E., Malcolm, M.A., and Moler, C.B. 1977, Computer Methods for Mathematical\\nComputations (Englewood Cliffs, NJ: Prentice-Hall), Chapter 9. \\n\\n[4] Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag), Chapter I.10 by G.H. Golub and C. Reinsch. \\n\\n[5] Dongarra, J.J., et al. 1979, LINPACK User\u2019s Guide (Philadelphia: S.I.A.M.), Chapter 11. \\n\\n[6] Smith, B.T., et al. 1976, Matrix Eigensystem Routines \u2014 EISPACK Guide, 2nd ed., vol. 6 of\\nLecture Notes in Computer Science (New York: Springer-Verlag).\\n\\n[7] Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag),\\n\\u00a76.7. \\n\\n[8] Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins
[5]\"}", "{\"comment\": \"Thank you to the authors for responding to the feedback!\\n\\n(Please note one new grammar error in the sentence after \\\"What are we really looking for?\\\" in 2.3 Motivation.)\"}", "{\"metareview\": \"This paper proposes an optimization algorithm \\u201cNewtonSummary\\u201d, which can be viewed as a middle ground of Newton\\u2019s method and Cauchy\\u2019s steepest descent. By grouping parameters and calculating \\u201csummarized\\u201d high-order derivatives of the loss function, the algorithm implements a computationally inexpensive (but crude) version of Newton\\u2019s method with Nesterov cubic regularization.\\n\\nThe reviewers agreed that the algorithm has some novel elements such as a computationally tractable way to summarize 2nd and 3rd order derivatives of the loss function. Indeed, as pointed out by one of the reviewers during reviewer-AC discussion, the computational method for higher order information presented in the paper can be useful, in particular for interpretability and understanding the landscape of DNNs.\\n\\nHowever, it was unanimously pointed out that the empirical demonstration of the proposed algorithm\\u2019s effectiveness leaves much to be desired. Although the authors claims to provide a preliminary \\u201cproof-of-concept\\u201d, the algorithm is evaluated in a limited range of benchmarks and models (although the authors indeed test it on a 100-layer model), is compared against only a few baselines, and yet does not show clear advantages over existing methods. This makes me question the motivation for using higher-order methods for neural network training\\u2014if gradient-based methods like adam are better, why use this method? 
Moreover, the paper does not provide any convergence guarantees, which is also a weakness.\\n\\nOverall, my conclusion is that the paper is slightly below the acceptance threshold for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The summary includes a summary on the discussion of strengths/weaknesses raised by the reviewers. Additionally, Rev Hw9J pointed out that the method could be described in an alternative form and the authors reflected that in Appendix B.\"}", "{\"summary\": \"This work considers building summaries of higher order loss derivatives, like the Hessian and the third-order Tensor, which bucket interactions at the level of layers or some arbitrary partitions, instead of each individual parameter. In particular, by considering a particular contraction (like with the all-ones vector or gradient direction), very compact higher-order summaries can be built (which scales polynomially in the number of layers, instead of parameters). As an application, they use it to derive a layerwise scaling of learning rates, which neatly interpolates between the two extremes of using Newton's method and Cauchy's steepest descent rule. The method is demonstrated on some very simple experimental setups.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is in general well motivated with the need to capture the interactions between parameters in different layers, which is often ignored by block-diagonal methods. This is operationalized in a natural way by studying the Hessian with suitable contractions.\", \"The approach of getting layerwise learning rates through their layerwise grouping is neat. This offers a principled extension to the Cauchy's steepest descent rule.\", \"The method could also find utility in studying the behaviour of feature learning across layers, and thus be used more than just in optimization.\"], \"weaknesses\": [\"The experimental section is quite weak. 
I understand that the authors themselves pitch it as a proof-of-concept, but I am not so sure about even if you can call it a proof of concept. The experiments are on small datasets like CIFAR, even over there none of the methods get the typical 90% and above accuracy, the test accuracy of their method is much worse than K-FAC.\", \"More fundamentally, it is unclear to me where lies the bigger problem: correcting the curvature across layers, or that within the layers. It is well known that the Hessian tends to have a significant energy on its block diagonals, and thus maybe correcting across the layers, may not possess significantly more information.\", \"Also, even for their method, I am curious what amount of the performance can be explained by simply estimating the scales of the Hessian on the diagonal blocks. In particular, if they instead use diag(\\\\bar{H}), and then use it to get layerwise learning rates, how does that perform? This would form a test bed to showcase how crucial is it to capture cross-layer information.\", \"Besides, I think in the vision setting the Hessian tends to be more homogeneous across the networks as opposed to that in Transformers with language modelling [1]. Hence, I think their approach might be more suited to that setting, and would be interesting to see if it can outperform methods like Adam-Mini [2].\", \"There are very limited baselines considered by the authors. I would have liked to see the Cauchy step size, AdaHessian, and even a block-diagonal quasi Newton method.\", \"The overall runtime cost can be quite large, as there are multiple Hessian vector products. Can the authors do a wall-clock comparison?\"], \"references\": \"[1] Ormaniec, et. al. (2024). What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis. arXiv preprint arXiv:2410.10986.\\n\\n[2] Zhang, et. al. (2024). Adam-mini: Use fewer learning rates to gain more. 
arXiv preprint arXiv:2406.16793.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"To provide a complete answer to Reviewer PhVC, we have added a paragraph in Appendix A, explaining thoroughly the implementation of the method presented in Section 3.\\n\\nWe hope that it is now easier to understand how it is possible to compute efficiently the tensors $\\\\mathbf{D}_{\\\\boldsymbol{\\\\theta}}^d(\\\\mathbf{u})$ containing order-$d$ information about the loss.\\n\\nMoreover, we have added some details about the symmetric structure of $\\\\mathbf{D}_{\\\\boldsymbol{\\\\theta}}^d(\\\\mathbf{u})$, which allows us to compute it even more efficiently: we compute and store only $\\\\frac{(S+d-1)!}{d! (S-1)!}$ coefficients instead of $S^d$. For example, with $S = 10$ groups of parameters and $d = 3$ (derivative of order 3), we store only $220$ coefficients instead of $S^d = 1000$.\"}", "{\"comment\": \"We thank Reviewer Hw9J for the review and the insightful remarks and suggestions. We also appreciate that the reviewer acknowledges our motivation and the thinking behind the paper.\\n\\n# Quality of writing\\n\\nWe thank the reviewer for the remark. We have carefully proof-read (again) our paper and made many minor changes (highlighted in blue). We hope that the writing quality is now acceptable.\\n\\n# Derivation of the optimization method\\n\\nThank you for the suggestion! We fully agree with the reviewer: there exist much simpler ways to derive our optimization method, and the one proposed by the reviewer is valid. 
We have included it in Appendix B, along with another interpretation of our method.\\n\\nAt start, we chose not to put it that way, because we wanted to keep the same notation in Sections 3 and 4, and our notation is more adapted to the mathematical analysis we provide in Appendix F.\\n\\n# Search in the gradient direction for each tensor\", \"the_reviewer_is_right\": \"in practice, we focus on the choice of direction $\\\\mathbf{u} = \\\\mathbf{g}$, and the central matrix of our computation is $\\\\bar{\\\\mathbf{H}}$, which is based on the Hessian.\\n\\nHowever, it is possible to choose $\\\\mathbf{u}$ differently. For instance, one can choose $\\\\mathbf{u} = \\\\mathbf{A} \\\\mathbf{g}$, where $\\\\mathbf{A}$ is the diagonal of the inverse of the Hessian. In short, it is possible to choose $\\\\mathbf{u}$ so that we partially take into account the curvature of the loss before computing $\\\\bar{\\\\mathbf{H}}$.\\n\\nAlso, it is possible to choose differently the matrix: instead of $\\\\bar{\\\\mathbf{H}}$ (the Hessian projected on certain directions) we can use other matrices. For instance, one could use the \\\"square\\\" of the Jacobian (used in the Gauss-Newton algorithm) projected on certain directions.\\n\\nWe do not know if there is a choice of $(\\\\mathbf{u}, \\\\bar{\\\\mathbf{H}})$ that would allow us to recover Shampoo. But it is clear that tuning them would lead us to algorithms different from the one we propose here. (Besides, Shampoo does not use directly the Hessian, so, if we want to recover Shampoo, we have to replace $\\\\bar{\\\\mathbf{H}}$ by a matrix that only uses first-order information).\\n\\n# Title of the paper\\n\\nWe thank the reviewer for the suggestion. The main difficulty is to find a title mentioning both higher-order derivatives and our optimization algorithm, while keeping it short.\"}", "{\"comment\": \"We thank Reviewer 6Jm1 for the review and the insightful remarks and questions. 
We also appreciate that the reviewer acknowledges our motivation and the relevance of our method (with potential applications beyond optimization).\\n\\n# Do we need cross-layer information?\\n\\n> if they instead use diag(\\\\bar{H}), and then use it to get layerwise learning rates, how does that perform? This would form a test bed to showcase how crucial is it to capture cross-layer information.\\n\\nWe provide such experiments in **Appendix K**.\", \"this_is_indeed_an_crucial_question\": \"do we really need cross-layer information? As mentioned at the end of Section 5, we provide such a study in Appendix K. We have tested our method (using full $\\\\bar{\\\\mathbf{H}}$) against its diagonal version (using only the diagonal of $\\\\bar{\\\\mathbf{H}}$) with LeNet and VGG11'. As the reviewer will notice (see Fig. 6), keeping the off-diagonal coefficients of $\\\\bar{\\\\mathbf{H}}$ leads to better and more stable optimization results. Please note that for a fair comparison with our method, we have tested a wide range of hyperparameters for the diagonal version.\\n\\nAlso, we believe that the reviewer would be interested in the results presented in **Appendix J**. We show how the choice of the partition affects training (we have tested several choices of partition of VGG11').\\n\\n# Inter-layer or intra-layer correction of the curvature?\\n\\n> it is unclear to me where lies the bigger problem: correcting the curvature across layers, or that within the layers.\\n\\nAccording to the experiments performed in Appendix K (see previous answer), it is important to consider inter-layer interactions. The combination of inter-layer and intra-layer interactions has not yet been fully achieved.\\n\\nBut, in theory, our method is flexible enough to incorporate part of the intra-layer curvature. 
Since it is possible to choose any custom candidate descent direction $-\\\\mathbf{u}\\\\_t$, one may choose a $\\\\mathbf{u}\\\\_t$ different from the gradient $\\\\mathbf{g}\\\\_t$, such as $\\\\mathbf{u}\\\\_t = \\\\mathbf{A} \\\\mathbf{g}_t$, where $\\\\mathbf{A}$ is the diagonal of the inverse of the Hessian. In short, it is possible to choose $\\\\mathbf{u}\\\\_t$ so that we partially take into account the curvature of the loss before computing $\\\\bar{\\\\mathbf{H}}$.\\n\\n# Test accuracy\\n\\nThis is a tough question. \\n\\nWe have decided to focus our experimental evaluation on the optimization process. Thus, we focus on the *training loss* metric and we did not use any of the common regularization techniques (data augmentation, weight decay/penalty, drop-out, SAM, etc.). This may explain the accuracies obtained on the test set.\", \"here_are_the_reasons_of_our_choice\": \"1. the proposed training algorithm is designed as an optimization method (just like the SGD, Newton's method, etc.), and not as a procedure to improve generalization (see Appendix B for the initial derivation of the method). Given that context, we have considered that the most relevant metric is the *training loss* (and not the training accuracy or any metric related to the test set);\\n 2. we know that some optimization methods, such as the SGD, produce good generalization results, even though they were originally designed for optimization. However, this \\\"implicit regularization\\\" phenomenon is not yet fully understood in the case of deep non-linear neural networks. 
To avoid reporting a metric related to a poorly explained phenomenon, we preferred to stick with the training loss.\\n\\nIf a user is interested in the test loss, we think that our method can be used just like other optimizers with data augmentation, weight decay, etc.\\n\\nBut we are aware that the \\\"gold standard\\\" in classification is the test accuracy, since that is the metric that we are really looking to optimize. This is why we provide test accuracies (Appendix I). \\n\\n**Please note that Figure 5 (in Appendix I) illustrates very well that comparing test losses/accuracies can be tricky.** For instance, the test losses and accuracies of VGG11' exhibit different behaviors: at several epochs (see epoch 15), best-performing models are different when considering loss and accuracy; and an exploding test loss may not imply a drop in accuracy. **So, some uncontrolled phenomena are at work. This illustrates the reasons why we focus on the training loss.**\\n\\nHowever, we would understand if the reviewer does not share this view.\\n\\n# Wall-time comparison\\n\\nThe reviewer is perfectly right, such a comparison was missing in the initial version. We have put it in Appendix M in the current version.\\n\\n# Application to transformers\\n\\nWe thank the reviewer for the suggestion! We hope we will be able to implement it and test it soon.\"}
Ei9KiIzgxK
Synthetic Data is Sufficient for Zero-Shot Visual Generalization from Offline Data
[ "Ahmet H. Güzel", "Jack Parker-Holder", "Ilija Bogunovic" ]
Offline reinforcement learning (RL) offers a promising framework for training agents using pre-collected datasets without the need for further environment interaction. However, policies trained on offline data often struggle to generalize due to limited exposure to diverse states. The complexity of visual data introduces additional challenges such as noise, distractions, and spurious correlations, which can misguide the policy and increase the risk of overfitting if the training data is not sufficiently diverse. Indeed, this makes it challenging to leverage vision-based offline data in training robust agents that can generalize to unseen environments. To solve this problem, we propose a simple approach—generating additional synthetic data. We propose a two-step process, first $augmenting$ the originally collected offline data to improve zero-shot generalization by introducing diversity, then using a diffusion model to $generate$ additional data in latent space. We test our method across both continuous action spaces (Visual D4RL) and discrete action spaces (Procgen), demonstrating that it significantly improves generalization without requiring any algorithmic changes to existing model-free offline RL methods. We show that our method not only increases the diversity of the training data but also significantly reduces the generalization gap at test time while maintaining computational efficiency. We believe this approach could fuel additional progress in generating synthetic data to train more general agents in the future.
[ "Offline Reinforcement Learning", "Generalization", "Data Augmentation", "Synthetic Data Generation" ]
Reject
https://openreview.net/pdf?id=Ei9KiIzgxK
https://openreview.net/forum?id=Ei9KiIzgxK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yxjTaTimuk", "wT3vVakUNi", "wSN7w7hrqr", "voKHAJv9qM", "vlAWVyFN8f", "rVkfkqkjK0", "pDNeU6yK9U", "mlVfDJy752", "kJmATnUm9A", "huaAX1vbzN", "gd4QB0BClb", "byaLbIaLfr", "aR67Zmkle2", "ZH8lis2lSl", "YSzwJ1Jhs1", "Xe6xQdya48", "XEWD2q16Yz", "Qh0fSs9Jz2", "7Ub3dCHU8P", "3o9lCoxz6x" ], "note_type": [ "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737524055284, 1732048886739, 1732533548037, 1730203946249, 1732051863028, 1730375127761, 1732051313180, 1732536170331, 1732050412487, 1732612196781, 1730541755688, 1732796101621, 1734678531954, 1732536015419, 1732049919822, 1730680980232, 1732536549367, 1733157829439, 1732536049577, 1732558584744 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_SgPS" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_SgPS" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_cYiQ" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_cYiQ" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_uuZk" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Area_Chair_WYdQ" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_zkst" ], [ 
"ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Authors" ], [ "ICLR.cc/2025/Conference/Submission10462/Reviewer_zkst" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Global response by the authors\", \"comment\": \"We truly appreciate the careful comments from the reviewers, as well as their appreciation of the simplicity, efficiency, and creative aspects of our suggested approach. We value the recognition of our efforts to enhance generalization in offline learning by means of diffusion model-based upsampling combined with data augmentation.\\n\\nWe hope we have addressed all your questions and concerns in the updated paper. If there is anything else we can clarify, please let us know. Otherwise, we would be grateful if you could consider increasing your support for our work with a higher score.\"}", "{\"comment\": \"Thanks for your effort on the response. I believe that comparing pure image augmentation with diffusion-augmentation under the same-size setting, and analyzing the choice of image augmentation in the first stage could enhance the paper. 
I will maintain my current score.\"}", "{\"summary\": \"The authors propose a novel data augmentation method to enhance generalization in an offline RL setting.\\nBy training a diffusion model on a set of augmented latent features, the model can subsequently generate additional latent data for offline RL training.\\nThe paper demonstrates that the distribution of the augmented data more closely aligns with the evaluation data distribution, resulting in improved generalization performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper proposes a novel approach to using diffusion models to generate new data for image-based RL.\\nAugmenting in the latent space helps avoid the overwhelming computational costs associated with both training and inference when using a diffusion model directly on image level.\\n2) A comprehensive experimental analysis is provided.\\n3) The paper is clearly written and easy to follow.\", \"weaknesses\": \"1) On line 246, I noticed that the _Augmented_ dataset has the same size as the _Baseline_ dataset, whereas the _Augmented Upsampled_ dataset is larger than the _Baseline_ dataset.\\nThis raises the question of whether the performance gain of the _Augmented Upsampled_ dataset over the _Augmented_ dataset is primarily due to the increased amount of augmented data used in training.\\nGiven that the diffusion model is trained on the augmented data, the data it generates may follow a similar distribution to the augmented data.\\nSo, is there a significant difference between using data augmentation to increase the dataset size versus using a diffusion model to generate additional data?\\nConducting an additional experiment where the size of the Augmented dataset is increased to match the Augmented Upsampled dataset would help isolate the potential benefits of the diffusion model's data generation.\\n2) Following the discussion above, the performance of this method might highly depend on 
the data augmentation used in the first phase.\\nThe types of data augmentation actually determine the data distribution generated by the trained diffusion model. Have you run any ablation study on this?\\n3) minor: typo on line 137 (invrease).\\ntypo on line 361 (tecnic)\", \"questions\": \"1) On line 150, the types of data augmentation used are listed.\\nCould you please explain why random shift, a common data augmentation used in image-based RL, is not included here?\\n2) On line 169, both the state $s$ and state $s'$ are augmented by a function called Augment.\\nAre they augmented by the same image transformation or a randomly sampled image transformation?\\n3) If I understand correctly, FDD in section 5.5.1 (line 361) refers to including 5% of data that is close to the evaluation distracting dataset.\\nIs it possible that, by incorporating a small amount of data from the Fixed Distraction Dataset along with the diffusion upsampling process, we could achieve relatively good performance even without the initial data augmentation phase?\\nBecause this small amount of data could be enlarged by the diffusion model.\\nIt will be interesting to test this hypothesis by comparing the performance of the models trained with different percentages of FDD and diffusion upsampling against the full method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"> Augmented dataset has the same size as the Baseline dataset\\n\\nWe appreciate the reviewer\\u2019s positive feedback on the novelty of our approach, the clarity of our writing, and the comprehensive experimental analysis. Thank you for raising this important question about dataset size and the distribution of upsampled augmented data compared to the original augmented data. 
Our upsampled dataset extends the augmented data distribution to better align with the testing data, leveraging the broader diversity introduced by upsampling. **While the diffusion model generates data based on the augmented data distribution, it also expands diversity beyond the original distribution, as shown in Figure 5 of the SynthER paper.** Our JS divergence analysis supports this observation, demonstrating that the upsampled data moves closer to the testing data distribution while increasing overall diversity. **At its core, our method integrates visual augmentation with upsampling into a unified approach to improve generalization when additional data is unavailable.** Investigating the impact of dataset size by comparing pure augmentation with our combined method using equal-sized datasets would indeed be a valuable direction for future work. **However, our current focus is on presenting a simple and effective solution for scenarios with limited data, where enhancing diversity through augmentation and upsampling is crucial.**\\n\\n> the performance of this method might highly depend on the data augmentation used in the first phase\\n\\nThank you for pointing out this complementary aspect to the earlier discussion. To improve generalization, we began with all augmentations proposed by RAD [1] and systematically refined them. We eliminated augmentations that caused training instability and iteratively narrowed down to those most impactful on generalization performance. Through this process, we observed that both environments favored similar augmentations, and we fine-tuned their parameters to maximize stability and effectiveness (as detailed in Supplementary Section B.0.3). Our focus was on achieving stable training outcomes while balancing computational constraints, as this work was conducted on a single GPU academic setting. 
**While we performed ablations during the selection process, we opted not to include an exhaustive set of results in the paper to maintain focus on the simplicity and effectiveness of combining visual augmentation with diffusion-based upsampling for generalization.** We believe this strikes a balance between demonstrating the method\\u2019s impact and avoiding overemphasis on augmentation-specific studies in offline RL. **We agree that a deeper exploration of the interplay between augmentations and diffusion-generated data is a valuable direction for future work.**\\n\\n_[1] Laskin et al. Reinforcement Learning with Augmented Data. NeurIPS 2020_\\n\\n> typos\\n\\nThank you for pointing out the minor typos. We corrected \\\"invrease\\\" (line 137) and \\\"tecnic\\\" (line 361) in the revised version of paper.\\n\\n> Are they augmented by the same image transformation or a randomly sampled image transformation?\\n\\nThank you for asking for the clarification of transformations applied to states. **We realized that we didn\\u2019t add details about that and we updated section B.0.3 in supplementary material.** To clarify, the augmentation function is applied to states s and s\\u2032, which uses independently sampled transformations from the same set of augmentations, each selected with equal probability. Within each state, the same transformation is consistently applied across all images in the stack to preserve temporal and spatial relationships, which is also used in RAD work. This ensures diversity across different states while maintaining structural integrity within each state.\\n\\n> Is it possible that, by incorporating a small amount of data from the Fixed Distraction Dataset along with the diffusion upsampling process?\\n\\nWe thank the reviewer for highlighting this intriguing aspect of our findings in section 5.1.1. 
Incorporating a small amount of data from the Fixed Distraction Dataset (FDD) alongside the diffusion upsampling process\\u2014and evaluating its performance without the initial data augmentation phase\\u2014is indeed an intriguing hypothesis. Our primary focus was to demonstrate the combination of augmentation and upsampling as a cohesive method, which is why we kept our approach consistent with the augmented and upsampled (ours) pipeline to ensure clarity and alignment with our key message. **That said, we acknowledge the potential value of exploring this direction, which we have identified as an open problem in Section 5.1.1, encouraging further investigation by the research community. If the reviewer believes this analysis would provide significant additional insights and help improve the paper\\u2019s impact, we are happy to include an ablation study in the revised version to examine this hypothesis further.**\"}", "{\"summary\": \"This paper proposes a two-step approach to improve generalization in offline reinforcement learning with visual inputs. By combining data augmentation with diffusion model-based synthetic data generation in latent space, the method enhances training data diversity without modifying existing algorithms. The authors evaluate their approach on both continuous (Visual D4RL) and discrete (Procgen) action spaces, demonstrating significant reduction in generalization gaps while maintaining computational efficiency. Notably, their method is the first to effectively address visual generalization challenges across both continuous and discrete control tasks in offline RL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents an innovative approach by combining two complementary data augmentation strategies: classic transformations and generative model-based data synthesis. 
This integration effectively leverages both the reliability of traditional augmentation methods and the diversity potential of generative modeling, providing a more comprehensive solution to the data diversity challenge in offline RL.\\n\\n2. The implementation of diffusion model-based data synthesis in latent space, rather than in high-dimensional observation space, demonstrates significant computational efficiency. This design choice makes the approach more practical and scalable while maintaining effectiveness in generating diverse synthetic data.\\n\\n3. The paper includes insightful analysis using metrics like Jensen-Shannon divergence to quantify the alignment between training and testing distributions\", \"weaknesses\": \"**Weaknesses**\\n\\n1. The discussion and analysis of chosen data augmentation techniques in Section 3.2 lacks sufficient depth. The authors should provide empirical evidence for their augmentation choices and properly reference established techniques from online RL literature, such as DrAC[1], SVEA[2], and the comprehensive survey[3]. The current treatment of augmentation strategies is superficial and fails to leverage valuable insights from prior work.\\n\\n2. The proposed Generalization Performance metric $G_{\\\\text {perf }}=\\\\frac{T_{\\\\text {test }}-B_{\\\\text {test }}}{B_{\\\\text {train }}-B_{\\\\text {test }}}$ needs better justification. A more straightforward approach would be using $T_{\\\\text {train }}/B_{\\\\text {train }}$ to evaluate training effectiveness, while comparing $B_{\\\\text {test }}/B_{\\\\text {train }}$ with $T_{\\\\text {test }}/T_{\\\\text {train }}$ would provide a more natural measure of generalization capabilities.\\n\\n3. 
The experimental results reveal a critical misalignment with the paper's claimed contribution to \\\"Zero-Shot Visual Generalization.\\\" The performance improvements predominantly stem from enhanced training performance rather than improved generalization ability, as evidenced by the persistent generalization gap between training and testing environments. This fundamental disconnect between the empirical results and the paper's main thesis requires substantial clarification and resolution for the work to be considered acceptable.\\n\\n4. The paper provides insufficient exploration of diffusion model design choices and their impact on performance, lacking crucial ablation studies on model architecture, hyperparameters, and the relationship between latent space dimensionality and generation effectiveness.\\n\\n[1] Automatic Data Augmentation for Generalization in Reinforcement Learning, NeurIPS 2021\\n\\n[2] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation, NeurIPS 2021\\n\\n[3] A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning, Arxiv, 2022\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"> The discussion and analysis of chosen data augmentation techniques in Section 3.2 lacks sufficient depth\\n\\nWe thank the reviewer for recognizing the innovation in our approach, its computational efficiency, and the value of our JS divergence analysis. However, we respectfully disagree with the reviewer\\u2019s concern regarding the lack of sufficient depth in the data augmentation. 
While we greatly value the contributions of methods like DrAC and SVEA, these methods involve algorithmic changes specific to online RL, **which differ from our objective of providing a simple, algorithm-agnostic solution for offline RL with visual observations.** Our focus is on achieving non-algorithmic changes to offline RL using simple, effective visual augmentations, as demonstrated by RAD [1], with four specific techniques detailed in Supplementary Section B.0.3. These augmentations, combined with diffusion-based upsampling, are central to our goal of enhancing data diversity and generalization without altering the underlying RL algorithm. **To address this further, we expanded the \\u201cRelated Work\\u201d section to clarify these distinctions in more detail and included the missing papers from the three suggested by the reviewer.**\\n\\n_[1] Laskin et al. Reinforcement Learning with Augmented Data. NeurIPS 2020_\\n\\n> The proposed Generalization Performance metric $G_{perf} = \\\\frac{T_{test} - B_{test}}{B_{train} - B_{test}}$\\nneeds better justification.\\n\\nRegarding the generalization metric, its design is grounded in established principles commonly used in reinforcement learning (RL) generalization studies. **See the original Procgen paper [1] Section 2.2. They use the following $R_{norm} = \\\\frac{R - R_{min}}{R_{max} - R_{min}}$.** The idea is to normalize w.r.t. what is possible given the setup. Here $R_{min}$ is the lowest possible score, i.e., the test performance of the baseline (with poor generalization), and $R_{max}$ is the highest possible score, the train performance on the baseline training data, which is the least diverse and easiest to overfit. **We added this discussion in the paper (section 4.3.1) as we agree it appears plucked from thin air in the current draft, so thank you for flagging.** \\n\\n_[1] Cobbe et al. Leveraging Procedural Generation to Benchmark Reinforcement Learning. 
ICML 2020_\\n\\n> The experimental results reveal a critical misalignment with the paper's claimed contribution to \\\"Zero-Shot Visual Generalization.\\\"\\n\\nThank you for highlighting this critical aspect of our work on zero-shot generalization. We respectfully disagree with the reviewer\\u2019s assertion that the paper does not demonstrate improved zero-shot generalization, as we show this in Procgen (see aggregate performance added to Table 3). Additionally, we present the FDD approach (Table 2), where we observe an improvement in the generalization gap for the DMC environments. That said, we understand that the improved performance in the original environment in Table 1 (not necessarily a bad thing!) could lead to confusion. **We are happy to rephrase the title if you have a recommendation.** One proposal could be \\u201cSynthetic Data Enables Training Robust Agents from Offline Data,\\u201d as our agents perform well across a wide range of settings. **We also updated Tables 2 and 3 to include $Test/Train$ and $Train-Test$ results for both environments, aligning with the metrics suggested by the reviewer.** Please let us know if this makes more sense now.\\n\\n> insufficient exploration of diffusion model design choices\\n\\nThank you for highlighting the need for deeper exploration of diffusion model design choices. **Our diffusion model builds directly on SynthER\\u2019s design, which allows us to make consistent comparisons with the V-D4RL benchmark and ensures the reproducibility of our results.** SynthER\\u2019s supplementary material (Section B) already provides extensive ablations on model architecture and denoiser parameters, and our findings closely align with these results. As such, duplicating those ablations in our work would have added redundancy without offering additional insights. Instead, we focused on demonstrating the effectiveness of our combined augmentation and diffusion-based upsampling method in the context of offline RL. 
**To provide clarity, we have added references to SynthER\\u2019s ablations in the updated Supplementary Section C.3.1. Additionally, we conducted and included a table on latent space size ablations in the updated supplementary material, addressing a gap not explored in SynthER\\u2019s ablation work (Section C and Section F).** While this analysis was initially omitted for brevity, we now provide it to offer further insights into the relationship between latent space dimensionality and performance, complementing SynthER's findings.\"}", "{\"title\": \"Kind Reminder: Request to Evaluate Rebuttal\", \"comment\": \"Dear Reviewer zkst,\\n\\nWe have carefully addressed all the points you raised in our rebuttal, and we believe the additions and explanations provided will help in evaluating the paper further. There is not much time left for us to address your additional comments; hence, we gladly ask you to evaluate our rebuttal at your earliest convenience, as your insights are important in ensuring a complete assessment of the work. We value the time you spent in this process and would be pleased to offer any more explanations should they be necessary.\\n\\nThank you for your time and consideration. \\n\\nAuthors\"}", "{\"title\": \"Response from authors\", \"comment\": \"Thank you for your review; we are pleased to see you appreciate that our method is a simple approach to achieve improved generalization in two distinct domains. It appears your primary concerns relate to scaling beyond the domains shown, which we believe will be challenging for us to answer concretely in this rebuttal. 
We note that the domains used have been popular for existing works on data augmentation [1, 2] and are still being used by industry labs in recent publications [3].\", \"see_specific_responses_below\": \"**Cost of diffusion modeling:** in this paper we are focused on the offline RL setting, where there is a bottleneck on the amount of available data but not necessarily on the amount of time or compute to maximize performance with it. Further, it may be possible when scaling this method to use an open source foundation model which already has world knowledge. Finally, for what it is worth, this work was done with extremely constrained computational resources of a single GPU in an academic lab - yet we were able to get state of the art performance for visual generalization. We think that is a good sign for scalability!\\n\\n**The GAN and VAE comparison is a great point - however - this comparison was already made in the SynthER paper and we do not have any reason to believe it would not hold in a more challenging setting**. Please check out Table 1: https://arxiv.org/pdf/2303.06614; the performance is drastically better for Diffusion, which makes sense, as it is now the dominant paradigm in generative modeling. What we are showing here is that it may also make it possible to achieve additional visual generalization benefits, if set up correctly with augmenting the data, which was not obvious to us initially and thus we believe is a valuable contribution to the community.\\n\\n**We agree that it is definitely important to make sure the data does not deviate too far from the ground truth distribution.** This was shown in the SynthER paper in Figure 5 and note there has been additional work in this space, such as Policy Guided Diffusion [4], which likely improves our synthetic data generation pipeline. The main contribution in this paper is showing these methods can aid visual generalization, which had not been shown before.\\n\\n_[1] Yarats et al. 
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels. ICLR 2021_\\n\\n_[2] Laskin et al. Reinforcement Learning with Augmented Data. NeurIPS 2020_\\n\\n_[3] Ortiz et al. DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors. NeurIPS 2024 Datasets and Benchmarks Track_\\n\\n_[4] Jackson et al. Policy-Guided Diffusion. NeurIPS 2023 Workshop on Robot Learning_\"}", "{\"comment\": \"I appreciate the authors' thorough response and the improvements made to the paper. The clarifications on data augmentation techniques, generalization metrics, and ablation studies have enhanced the technical presentation. The additional results in Tables 2 and 3 provide better evidence for the method's effectiveness.\\n\\nHowever, while I acknowledge these improvements and will adjust my score upward, I maintain some reservation about the impact-to-complexity ratio. The performance gains, though positive, appear modest given the complexity of implementing and tuning both the data augmentation pipeline and the diffusion model-based synthesis. Therefore, while the paper makes a valuable contribution, I believe it remains at a borderline level for acceptance.\\n\\nThank you for your efforts in addressing the review concerns.\"}", "{\"summary\": \"The paper discusses a novel approach to enhance zero-shot visual generalization in offline reinforcement learning (RL) by integrating data augmentation and diffusion models. The proposed two-step method first augments the original dataset to increase diversity, then employs a diffusion model to generate additional synthetic data in latent space. This approach significantly reduces the generalization gap in both continuous (V-D4RL) and discrete (Procgen) control tasks without altering existing model-free RL algorithms. 
The results demonstrate improved performance in unseen environments, suggesting that this method could advance the training of more robust agents in offline RL settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The two-step approach effectively combines data augmentation and diffusion model-based upsampling, significantly reducing the generalization gap in both continuous (V-D4RL) and discrete (Procgen) control tasks. This leads to improved performance in unseen environments without requiring modifications to existing model-free offline RL algorithms.\\n\\n2. By augmenting the original dataset and generating synthetic data in the latent space, the method broadens the distribution of training data. This increased diversity helps mitigate overfitting to spurious correlations in visual inputs, making the trained agents more robust to variations in unseen scenarios.\\n\\n3. The approach maintains computational efficiency by operating in the latent space rather than the pixel space, allowing for the generation of diverse synthetic data without incurring significant computational costs. This scalability makes it practical for real-world applications in various domains, such as healthcare and robotics.\", \"weaknesses\": \"Although the method shows promising results in benchmarks like V-D4RL and Procgen, these are controlled environments. It\\u2019s unclear how the method would perform in more complex, real-world scenarios where the variety of unseen situations is vastly greater than in benchmark tests.\\n\\nThe effectiveness of the approach depends heavily on specific augmentation techniques like rotation, color jittering, and color cutout. 
The results may vary significantly if the distribution of unseen environments does not align well with these augmentations.\\n\\nWhile the authors claim the diffusion-based data generation is computationally efficient, training and running diffusion models can be resource-intensive. This could be a bottleneck for scaling up the approach to larger datasets or high-resolution visual inputs.\\n\\nThe paper focuses on a two-step process (data augmentation and diffusion model-based upsampling) but does not explore or compare with other generative models (e.g., GANs, VAEs) that could also potentially increase diversity and improve generalization.\\n\\nWhile augmentation and synthetic data help generalization, there is a risk that the model may overfit to artificially generated diversity, especially if this data diverges from real-world test distributions.\", \"questions\": \"How does the proposed approach handle significantly different visual distributions in real-world applications (e.g., new lighting conditions or object appearances)?\\n\\nWhat are the specific computational costs associated with diffusion model-based upsampling, especially when scaled to larger datasets or higher-resolution visual inputs?\\n\\nHas the performance of the approach been tested against other generative methods, such as GANs or VAEs, to assess if they could offer similar improvements with potentially lower computational overhead?\\n\\nHow does the choice of augmentation techniques affect generalization across different types of environments? Would different augmentations be needed for different application domains?\\n\\nCould there be an overfitting risk associated with heavy reliance on synthetic data? 
How does the method mitigate this, if at all?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from the authors\", \"comment\": \"We appreciate your careful review, your acknowledgment of our improvements, and your decision to raise the score. Your comments have been quite helpful in directing important clarifications and additions, including those seen in Tables 2 and 3.\\n\\nRegarding the impact-to-complexity ratio: to the best of our knowledge, this is the first work to effectively implement this practical method in two different kinds of offline RL environments\\u2014one with continuous action spaces and one with discrete action spaces. By demonstrating its effectiveness across these settings, we believe our method provides a strong starting point for the research community, aiming to reduce the complexity of selecting augmentation strategies and tuning hyperparameters and making it easier for others to adopt and build upon this approach.\"}", "{\"metareview\": \"Summary: This paper investigates a novel approach to improving generalization in offline visual reinforcement learning (RL), focusing on the integration of data augmentation and latent-space diffusion models. Unlike existing methods, which rely solely on augmentation or specific model modifications, the proposed two-stage method first enhances the diversity of the training data through data augmentation and then employs a diffusion model to generate synthetic transitions in latent space. This strategy addresses visual generalization challenges without modifying existing model-free RL algorithms. The approach is evaluated on two distinct benchmarks: V-D4RL (a continuous control task) and Offline ProcGen (a discrete control task). 
Empirical results demonstrate that combining data augmentation with latent-space upsampling significantly reduces the generalization gap, leading to improved performance in previously unseen environments.\", \"strengths_and_weaknesses\": \"The reviewers generally recognize that this paper addresses a crucial problem in offline visual RL \\u2013 the generalization to unseen environments. They also appreciate the simplicity and strong performance of the proposed approach, as well as its independence from specific RL algorithms, making it widely applicable to various offline RL models.\\n\\nHowever, they express reservations about the significance of the findings and their practical usefulness. Specifically, the method combines classic data augmentation with diffusion-based data synthesis from SynthER in a straightforward two-stage pipeline. The method's reliance on specific augmentations, along with the need for tuning data augmentation and diffusion model parameters, limits its applicability across diverse domains. Despite promising results in benchmark environments like V-D4RL and Procgen, questions remain about the method's performance in more complex real-world scenarios, where the variety of unseen situations is much greater. There is also a risk of overfitting to synthetic data, especially if the generated data does not align with real-world distributions. Additionally, despite claims of computational efficiency, training and running diffusion models can be resource-intensive, which could hinder scaling the approach to larger datasets or high-resolution inputs. Furthermore, the paper lacks a thorough analysis of diffusion model design choices, such as architecture, hyperparameters, and latent space dimensionality, and their impact on performance, instead largely referencing prior work SynthER. 
The experimental scope is also seen as too narrow, as the paper only includes DrQ and CQL, and expanding the experiments to include additional methods like IQL, TD3+BC, and EDAC (similar to SynthER) would provide a more comprehensive evaluation.\\n\\nThe authors addressed some of these points during the discussion phase. However, the reviewers remained unconvinced and were not championing the paper. While the study's findings are compelling and highlight the effectiveness of data augmentation and synthesis, there is insufficient evaluation to demonstrate the method's strengths in real-world scenarios. This limits the support for the paper's strong claim that \\\"Synthetic Data is Sufficient for Zero-Shot Visual Generalization from Offline Data.\\\"\\n\\nTherefore, the paper is not ready for this ICLR. I encourage the authors to continue this line of work for future submission.\", \"additional_comments_on_reviewer_discussion\": \"The current recommendation is based on the identified weaknesses, particularly the lack of convincing evidence for the significance and practical usefulness of the proposed method, as well as the absence of a thorough analysis of the design choices behind it.\"}", "{\"title\": \"Kind Reminder: Request to Evaluate Rebuttal\", \"comment\": \"Dear Reviewer uuZk,\\n\\nWe have carefully addressed all the points you raised in our rebuttal, and we believe the additions and explanations provided will help in evaluating the paper further. There is not much time left for us to address your additional comments; hence, we kindly ask you to evaluate our rebuttal at your earliest convenience, as your insights are important in ensuring a complete assessment of the work. We value the time you spent in this process and would be pleased to offer any more explanations should they be necessary.\\n\\nThank you for your time and consideration. 
\\n\\nAuthors\"}", "{\"title\": \"Response from authors\", \"comment\": \"> the method requires tuning of the data augmentation parameters, which limits it's applicability.\\n\\nWe sincerely thank you for your valuable feedback and are glad you found our method \\\"simple and easy to understand.\\\" We acknowledge that tuning data augmentation techniques and diffusion model parameters can be challenging, as noted in our limitations discussion in Section 7. To address this, we used all augmentations from RAD [1] to systematically narrow down the augmentation options to reduce computational cost for effectiveness and training stability. For diffusion model parameters, we started with hyperparameters proposed in the SynthER paper that led to a proper balance between generalization and overfitting. **To mitigate the time-consuming nature of tuning, we employed JS divergence analysis and distribution visualizations to assess alignment between the upsampled and baseline datasets.** Specifically, this was achieved by systematically varying the size of the denoiser network and the number of training steps, allowing us to identify configurations that produced the best alignment. This approach allowed us to reduce the need for repeated RL training while efficiently improving data diversity and generalization. **We have updated Supplementary Section B.0.3 to include a detailed explanation.**\\n\\n_[1] Laskin et al. Reinforcement learning with augmented data. 2020_\\n\\n> The experiments only include DrQ and CQL. Since this paper deals with extending the data and can be applied to many different methods, it would make the method more compelling if there were more methods like in SynthER, e.g. IQL, TD3+BC, EDAC.\\n\\nWe selected DrQ+BC and CQL to align with the benchmark datasets from V-D4RL and Offline Procgen (Lu et al., 2023a; Mediratta et al., 2024), respectively, to ensure fair comparisons with existing results. 
DrQ+BC was chosen because its authors highlighted generalization challenges for model-free algorithms on distracting datasets, providing an opportunity to demonstrate how our method effectively addresses these issues. CQL was selected due to its underperformance in offline generalization tasks compared to other algorithms, making it an ideal case to showcase the potential of our method. Note that DrQ+BC is largely the same as TD3+BC but for visual observations. While additional algorithms like IQL and EDAC could be explored in future work, **our focus was on demonstrating that our method is algorithm-agnostic and applicable across diverse environments, including both continuous and discrete action spaces, rather than comparing the relative performance of various algorithms.**\\n\\n> Do I understand correctly that your 'Upsampled' method is akin to SynthER? If that's so, I would put that in parenthesis. If not, could you provide that comparison?\\n\\nThank you for the suggestion. In the Method Section (Subsection 3.1), we have updated the \\\"Upsampling with Diffusion Models\\\" item to explicitly reference SynthER as the method we employed for upsampling.\\n\\n> Response to \\\"Writing\\\" \\n\\nThank you for your valuable feedback on the writing and figure presentation. Regarding Figures 4, 5, and 6, we aimed to balance readability and organization by combining multiple charts into single figures to effectively summarize the data. While we recognize the figures are somewhat large, this approach minimizes disruption to the paper's structure. For the heatmaps, we chose this format for concise and quick visual interpretation. Although adding exact values on the squares could provide additional information, it reduced visual clarity due to interference with the color scheme. We opted for heatmaps instead of bar plots to maintain readability but appreciate the suggestion and will explore alternative formats, such as annotated heatmaps, in future work. 
Finally, we corrected the spelling of \\\"tecnic\\\" to \\\"technique\\\" in the revised version and are glad you liked the original phrasing.\"}", "{\"summary\": \"This paper proposes a novel two-stage method to improve generalization in offline visual RL. The method first introduces data augmentations, then trains a latent-space diffusion model to generate new transitions.\\n\\nThe method is tested on V-D4RL and Offline ProcGen. The experiments show that augmentation and upsampling together greatly improve generalization performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method is simple and easy to understand;\", \"The proposed method improves generalization performance of offline visual RL methods;\", \"The method also yields an additional improvement when given a small subset of data with distractions;\", \"The method only changes the dataset and therefore does not depend on the particular RL algorithm, so in theory it can be used to improve any offline RL algorithm;\"], \"weaknesses\": [\"As authors listed, the method requires tuning of the data augmentation parameters, which limits its applicability.\", \"The experiments only include DrQ and CQL. Since this paper deals with extending the data and can be applied to many different methods, it would make the method more compelling if there were more methods like in [SynthER](https://arxiv.org/abs/2303.06614), e.g. IQL, TD3+BC, EDAC.\", \"###### Writing\", \"Figures 4, 5, 6 are too large\", \"JS divergence heatmaps in the figures throughout the paper are not very informative. In figures 1b and 6b, I think it should just be a bar plot, heatmaps with just 4 values seem unnecessary. In figures 4 and 5, to make it more informative, I'd put the exact values on top of the squares;\", \"361 tecnic -- technique? 
Although it's incorrect, I like this spelling.\"], \"questions\": [\"Do I understand correctly that your 'Upsampled' method is akin to [SynthER](https://arxiv.org/abs/2303.06614)? If that's so, I would put that in parenthesis. If not, could you provide that comparison?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"We appreciate your suggestion and review of our responses. As mentioned in our initial response, **our focus is on synthetic data generation rather than a thorough investigation of augmentation techniques. However, we emphasize that our method combines visual augmentation with upsampling to efficiently increase diversity and generalization in data-limited settings.**\\n\\nAs discussed earlier, we methodically refined the augmentation decisions for stability and efficacy. Our JS divergence study demonstrates how the upsampled data better aligns with the testing distribution while improving diversity. We will consider your insightful recommendation to _compare pure image augmentation with diffusion-augmentation under the same-size setting and to analyze the choice of image augmentation_ in future extension work\"}", "{\"title\": \"Second Follow-Up on Rebuttal\", \"comment\": \"Dear Reviewer uuZk,\\n\\nAs the December 2nd midnight AoE deadline for questions approaches, we kindly request your review of our rebuttal. We believe we have addressed all your points and hope the clarifications meet your expectations. If so, we would greatly appreciate it if you could reflect this in your evaluation score. 
Thank you for your time and consideration, and please let us know if further clarification is needed.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Kind Reminder: Request to Evaluate Rebuttal\", \"comment\": \"Dear Reviewer cYiQ,\\n\\nWe have carefully addressed all the points you raised in our rebuttal, and we believe the additions and explanations provided will help in evaluating the paper further. There is not much time left for us to address your additional comments; hence, we kindly ask you to evaluate our rebuttal at your earliest convenience, as your insights are important in ensuring a complete assessment of the work. We value the time you spent in this process and would be pleased to offer any more explanations should they be necessary.\\n\\nThank you for your time and consideration. \\n\\nAuthors\"}", "{\"comment\": \"Thank you for your response! I choose to keep my score unchanged at this time.\"}" ] }
EhweLJiYi5
LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable Machine Learning Models
[ "Pedro Seber", "Richard Braatz" ]
Interpretable models can have advantages over black-box models, and interpretability is essential for the application of machine learning in critical settings, such as aviation or medicine. LASSO and elastic net, the most commonly used interpretable methods, are limited to linear predictions and have poor feature selection capabilities. Other important interpretable methods, such as tree-based or generalized additive models, are nonlinear but have limited performance in some scenarios. In this work, we introduce the LASSO-Clip-EN (LCEN) algorithm for the construction of nonlinear, interpretable machine learning models. LCEN is tested on a wide variety of artificial and empirical datasets, frequently creating more accurate, sparser models than other models, including those for building sparse, nonlinear models. LCEN is robust against many issues typically present in datasets and modeling, including noise, multicollinearity, data scarcity, and hyperparameter variance. LCEN is also able to rediscover multiple physical laws from empirical data and, for processes with no known physical laws, LCEN achieves better results than many other dense and sparse methods -- including using 10.8-fold fewer features than dense methods and 8.1-fold fewer features than EN on one dataset, and is comparable to or better than ANNs on multiple datasets.
[ "Machine Learning", "Feature Selection", "Elastic Net", "Interpretable Machine Learning", "Interpretability", "Applications of Machine Learning", "Applied Machine Learning" ]
Reject
https://openreview.net/pdf?id=EhweLJiYi5
https://openreview.net/forum?id=EhweLJiYi5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yge7Bl9NeB", "ybbPoeAMCd", "vV7advVCVf", "teQksEiLCc", "pGhmIYThst", "nbZopIDHEh", "md12ZcoAER", "ksqRg4YJPr", "jppSuL1nk4", "gN02IKpg8A", "bqm5isoVK9", "ZCdLdDBaQq", "VA5C4iHuxM", "UiA2ujK5Io", "TVLyOKShwd", "RbTLfEPQMH", "RCLFQWdMdl", "QpWIsnXJ5L", "OavLdn9ODZ", "MmtFc5RUUH", "KUy0Q8fhJA", "IND8fSwOKB", "HtkIySrOAR", "HbaqqgqIMt", "FhPSd2YiDU", "DzAfkjJvCP", "DfbuPHS5dq", "DPD8dK3nFF", "BKqOjjIPRJ", "9o4FCqo4KU", "71J7bjlmYM", "6mKwNTOzUc", "50bJHfmclA", "3qWMB2pxri", "2MPLa3M7g6", "1lGojipzjA", "1bLriLRksc", "1KI08t7Ady", "09yd6X1kW4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732775568085, 1732252483181, 1732658746816, 1737523704439, 1732757261976, 1732516092066, 1732253350345, 1732453664807, 1732252910351, 1732253090309, 1732252641131, 1732574510575, 1731472913151, 1732253544519, 1732957169410, 1732253243329, 1732545061552, 1732775910832, 1732252863030, 1732512421197, 1732757293472, 1732992904564, 1732252373575, 1732658791827, 1732253901162, 1732512362670, 1732642804509, 1732253151760, 1732252493227, 1732252583578, 1729973459889, 1734647262078, 1730701425096, 1729370514278, 1732253413660, 1732253592439, 1732981572890, 1730019845989, 1732570486333 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Area_Chair_YDdQ" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_Ko84" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_2thw" ], [ "ICLR.cc/2025/Conference/Submission5405/Area_Chair_YDdQ" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_knkT" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_Ko84" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_GMaV" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_knkT" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_GMaV" ], [ "ICLR.cc/2025/Conference/Submission5405/Area_Chair_YDdQ" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_2thw" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_Ko84" ], [ "ICLR.cc/2025/Conference/Submission5405/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5405/Authors" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_Ko84" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_knkT" ], [ "ICLR.cc/2025/Conference/Submission5405/Reviewer_GMaV" ] ], "structured_content_str": [ "{\"title\": \"Author Response to the Comment of Reviewer Ko84\", \"comment\": \"> Thank you for your response and clarification. While I understand your perspective, I still believe that incorporating symbolic learning into the experiment comparison would enhance the paper by providing insights into nonlinear term explanations.\", \"author_response\": \"thank you for the continued engagement. If you have any other suggestions, concerns, or criticisms, please let us know.\"}", "{\"comment\": \"Questions:\\n> [The definition of Sparsity] In the second paragraph of the introduction section, the author mentioned that: \\\"It should be noted that nonlinear models may also be made sparse, and even interpretable, as described later in this section and the rest of this work.\\\" The definition of sparse here seems not to be clear to me. For the decision trees, it might be the number of leaves, for the additive models, it might be the number of non-zero coefficient terms, and for the non linear models, it is not clear to me what the definition of sparsity here means. Are you referring to the number of non-zero terms or the number of features that is used to construct the model? Different definitions might lead to different conclusions to your model design and experiment analysis.\", \"author_response\": \"LCEN does not directly have any verification step. The revamped experiments with artificial data show that [...]. On the other hand, most of the tasks in Section 3.2 (other than those summarized in Table 3) are real datasets from processes with unknown physical laws. 
In this scenario, it is impossible to determine what features should have been selected and what should have been discarded by the model, as the true and false features are not known in the first place. We have shown the number of features selected (particularly in the \\u201cDiesel Freezing Point\\u201d dataset, which contains more features than samples) to highlight that LCEN can build sparse yet accurate models.\\nWe further highlight in the main body that, for the \\u201cDiesel Freezing Point\\u201d dataset (Table 4), we used different cutoff values to simulate the following scenario: \\\"An end user could prioritize creating very sparse models, even at the expense of increasing these models' MSEs. To simulate such a scenario, the LCEN cutoff hyperparameter was increased from the value that minimizes the validation MSE to create sparser models.\\\" As mentioned above, there is no way to know what the real physics is for this task, but validation- and test-set MSEs, together with any sparsity preferences or constraints, should guide the end users.\", \"title\": \"Author Response to the Review of Reviewer Ko84 Part 2/3\"}", "{\"title\": \"Author Response to the Comment of Reviewer GMaV\", \"comment\": \"> I really appreciate the authors' effort in revising the paragraphs and addressing my concerns. The readability of the paper has improved noticeably, which is commendable. Nevertheless, I find the response to concerns regarding the novelty of the work to be insufficient. This issue has also been shared by other reviewers as well. Therefore, I am maintaining my original score.\", \"author_response\": \"thank you for your comment. 
We would like to highlight that one of the reviewers, Reviewer knkT, considered that the originality of this work was a strong point precisely because the combination used in LCEN is novel, even though each component is not.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author Response to the Comment of Reviewer knkT Part 1/2\", \"comment\": \"> Dear Authors,\\n\\n> Thank you for your thorough and detailed response to my review, as well as the extensive revisions to your submission. The quality of the paper has improved significantly through these changes. I particularly appreciate the following:\\n\\n> The more comprehensive experimental evaluation, especially a) including high-dimensional settings, b) adding comparisons with non-convex methods (SCAD, MCP), c) including a debiasing approach, d) the new artificial data experiments demonstrating LCEN's performance across varying n/p ratios\\n\\n> The clearer explanations of design choices and hyperparameter selection\\n\\n> The expanded related work section and better contextualization within the literature\", \"author_response\": \"we have expanded the limitations to state that one of the limitations of LCEN is \\\"that the feature expansion algorithm is better suited to numerical data over image or text data\\\" and that users in scenarios \\\"with non-numerical data types\\\" may prefer to use other methods.\"}", "{\"comment\": \"Dear reviewers,\\n\\nThe authors have provided individual responses to your reviews. Can you acknowledge you have read them, and comment on them as necessary? 
The discussion will come to a close very soon now:\\n- Nov 26: Last day for reviewers to ask questions to authors.\\n- Nov 27: Last day for authors to respond to reviewers.\\n\\nYour AC\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 6/7\", \"comment\": \"> Similarly, it should be highlighted that in general use cases, the complex non-linear feature transformations hinder meaningful interpretability. If more than a handful transformed features are selected, including potentially higher-order interaction terms, polynomial expansions and fixed non-linear transforms.\\nAND\\n> 3. As touched on in the previous point, the paper also claims to yield an interpretable model. However, the interpretability is strongly impaired through the use of the deterministic feature transformations and interaction effects. If the model is not extremely sparse, any meaningful interpretability is quickly lost in the presence of feature transformations such as $\\\\log(X_1)/X_1^2$ or $X_1^3 \\\\cdot X_2^2$.\", \"author_response\": \"the noise level equals mean(added noise / y without noise). We have added this definition to the main text.\", \"questions\": \"> The following list contains both questions (Q) as well as suggestions (S). More questions can be found under \\u201cWeaknesses\\u201d, points 1, A-D.\\n\\n> (Q) The algorithm does not clearly state what is meant by \\u201cscaled training data\\u201d in line 191. I assume this is supposed to be the scaled expanded data set using the transformation hyperparameters from the lasso step except those transformed features removed by the lasso-clip step? 
Somewhat confusingly, in the cross-validation part of the EN step, all feature expansions explicitly only happen temporarily and within the loop but not outside of it, so it should be clarified which scaled training data line 191 refers to.\", \"regarding_the_feature_expansion_within_the_en_loop\": \"this expansion could have been done outside the loop, as the degree and lag hyperparameters have been fixed before. The implementation occurs in this way so that the LASSO function can be reused for EN (by simply setting potential $L_1$ ratios $\\\\ne 1$). To clarify the algorithm, we add that the clip step only records what features should be removed, and that the actual removal happens within each loop after the expansion of features.\\n\\n> (Q) The noise is given as a percentage value (of what?) instead of the standard deviation of the simulated additive noise. The meaning of this percentage should either be clearly stated, or the standard deviation used in this setting simply stated instead.\"}", "{\"title\": \"Reply to Author Response\", \"comment\": \"Thank you for the response and modification on the paper. I still have some questions and concerns about your paper:\\n\\n**[The definition of noise]** In your previous response, you mentioned the noise level definition, but I still didn't find the paper on how you define the noise. The distribution of noise directly impacts the generalizability of the model. Gaussian noise, uniform noise, or domain-specific noise can all lead to different challenges in feature selection and model performance. Understanding this is critical for reproducibility and for assessing LCEN's robustness.\\n\\n**[Model Comparison]** I still believe that the model comparison is unfair, especially in the black-box model since the models are given into too much unnecessary information, which does harm to the model construction. Just take a simple example of Random Forest. 
As an example, if you are taking the sqrt parameter as the selection of feature for each internal node construction for a dataset with 10 useful features and 90 useless features, you see that the probability for choosing the useful features is different (as you have 10 feature candidates from 100 each time), with you having 10 useful features and 26 useless features (as you have 6 feature candidates from 36 features each time). Therefore, I think the feature selection (either forward/backward or any kinds of feature selection) should be added to the paper so that the argument in the paper is clear, and also it would be a fair comparison in the number of features for comparisons.\\n\\n**[Interpretability Concern]** In the reply, you mentioned that your explanation would be similar to how SHAP and LIME are doing to show the contribution to the model, but SHAP and LIME are all post-hoc explanation methods, which means that you are not selecting the feature during the process/during the explanation process. In contrast, LCEN claims to build interpretability directly into the model by performing feature selection and constructing sparse, interpretable equations. Therefore, equating LCEN\\u2019s interpretability to that of SHAP and LIME could be misleading. \\n\\nAlso, this definition of the sparsity might also be hard to incorporate with your definition of interpretability since the complexity introduced by these transformations might even rival that of black-box models, making the model difficult for humans to interpret, despite its sparse structure. I think reviewer knkT also points out that interactive terms are harder to understand within the model. Feature transformations and combinations, such as interaction terms or nonlinear transformations, may improve accuracy but inherently reduce the model's transparency. Without a clear mechanism for explaining these transformations, the model's interpretability could rival that of black-box models. 
The relationship among sparsity, accuracy (MSE), and interpretability might need to be enhanced within the paper or clearer stated.\\n\\nFinally, symbolic learning focuses on deriving interpretable equations and relationships, aligning closely with LCEN's stated goal of interpretability. Including symbolic learning in the discussion would provide a more complete comparison and strengthen the paper's contribution to the field. While the paper shows promise and formalizes the feature selection procedure, these concerns, particularly around interpretability and fair model comparison, need to be addressed for the paper to fully meet its objectives. I look forward to seeing these aspects clarified and improved.\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 2/7\", \"comment\": \"C: Why does the final step also include a hard-thresholding step? Typically, hard-thresholding after the lasso (or equally elastic net) improves support recovery but slightly harms estimation error. Hence, the literature revolving around sequential lasso estimation with hard thresholding (Zhou, 2009;2010;2023, van de Geer et al., 2011) uses this approach only in the first step to identify the relevant features, and in the final step fits an OLS model on the reduced feature set to counteract the shrinkage bias introduced by L1 and L2 regularization in the coefficients (van de Geer et al., 2011, Belloni and Chernozhukov, 2013). Such a debiasing step at the end is missing from LCEN, whose goal decidedly is not only to select the correct features but also produce the correct coefficients.\", \"author_response\": \"we have reformed the experiments with artificial data to better show when LCEN can recover the true features of different datasets. 
Regarding the coefficient magnitudes and the clip step, note that the clip step is executed on the scaled coefficients, so these coefficients are supposed to have similar magnitudes (if they are from true features) independently of the data's values, and the coefficients from false features tend to have lower magnitudes, which explains why the clip step is effective. We have added this clarification to the Methods section. We agree that formal theoretical foundations would improve this work, but we believe that such results are beyond the scope of this work, as we preferred to validate the performance of LCEN empirically with a variety of scientific and non-scientific tasks.\", \"d\": \"Why does the first step use a different regularizer (lasso) than the second step (elastic net)? I suspect this will produce some theoretical inaccuracies, as parts of the hyperparameters were tuned toward another objective than the others.\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 3/7\", \"comment\": \"> I think for LCEN to be properly presented and analyzed for the broader machine learning audience, answering questions roughly similar to the given references is expected for an extension/variation of the multi-stage thresholded lasso (Zhao, 2009). Under which conditions can LCEN select the true transformed features? Under which conditions is the estimator consistent? How does the algorithm\\u2019s runtime and complexity scale with increasing n and p and \\u201cdegree\\u201d, and at which point does it become infeasible on a CPU? 
How does it perform on varied tasks with different amounts of sparsity and ratios of samples to features (in particular, experiments in a high-dimensional setting are missing where $p \\\\gg n$ - however these are usually most important sparse regression)?\", \"author_response\": \"even though the relativistic energy example contains only two true features to be estimated, this problem was treated as one whose true features were unknown a priori. Thus, degree values between 1 and 6 (inclusive) were tested, leading to expanded datasets with {8, 22, 42, 68, 100, 138} features respectively. We have added this information to the main text. Although the number of samples was always higher than the number of expanded features, there is significant potential for an algorithm to select false features. As highlighted in the main text, LCEN always selects both true features and none of the false features at all noise levels tested. The experiments with datasets generated by processes with known physical laws (summarized in Table 3) follow a very similar pattern: the CARMENES star data was tested with degree values between 1 and 10 (inclusive), leading to an expanded dataset with up to 350 features, which is larger than the number of samples (293). The Kepler's 3rd Law datasets were tested with degree values between 1 and 3 (inclusive), leading to an expanded dataset with up to 18 features, also larger than the number of samples (6 or 8). As highlighted in Table 3, LCEN always selected the true feature and did not select any of the false features in all of these datasets. We have also added this information to the main text for clarification.\"}", "{\"title\": \"Author Response to the Review of Reviewer GMaV Part 2/2\", \"comment\": \"> Given the current formulation of the algorithm, it makes me wonder why the author has chosen to perform the feature selection with LASSO and EN, but not other algorithms such as Ridge Regression and many others. 
It could easily lead to a family of algorithms that combines the feature selection for multiple algorithms. More explanation and experimental results on the effect of switching underlying algorithm components could be beneficial to this paper.\", \"author_response\": \"as mentioned above, we have significantly expanded on the reasons behind our choices in the Methods section. As highlighted by our ablation experiments, the LASSO-Clip-EN algorithm is the only combination that combines high accuracy, high sparsity, and a low runtime. The LASSO-Clip, EN-Clip, and LASSO-EN algorithms tend to have lower accuracy than LCEN, and the EN-Clip algorithm is much slower (due to the EN step, not due to the Clip step). The LASSO-Clip-LASSO tends to have slightly lower accuracy than LCEN, but a faster runtime. The EN-Clip-EN tends to have similar accuracy to LCEN, but it has a much lower sparsity and a much higher runtime.\\n\\nThe combinations that start with EN are slower than LCEN because EN has a greater number of hyperparameter combinations to be tested, and these combinations are tested with a higher number of features (as the full expanded feature set has not been subject to any selection via the LASSO and Clip steps). These combinations are also less sparse because the L$_1$ and L$_2$ norms compete during EN regularization, and a combination that prioritizes the L$_2$ norm may have a lower cross-validation MSE. Beginning with the LASSO thus increases the algorithm's sparsity and speed at no cost to its accuracy. We disagree that using Ridge regression could be a viable choice, as Ridge regression (and all L$_2$-based regularizations) does not perform feature selection.\"}", "{\"comment\": \"Thank you very much for your responses, and I appreciate your contribution to the literature on high-dimensional statistics. However, after carefully reviewing your comments, I regret that my concerns remain unresolved. 
A more detailed and comprehensive comparison of the proposed method with the existing literature, both empirically and theoretically, is essential. I strongly encourage you to thoroughly address my comments and refine the work before considering submission to another journal or conference, should ICLR decide to decline it.\"}", "{\"title\": \"authors - reviewers discussion open until November 26 at 11:59pm AoE\", \"comment\": \"Dear authors & reviewers,\\n\\nThe reviews for the paper should now be visible to both authors and reviewers. The discussion is open until November 26 at 11:59pm AoE.\\n\\nYour AC\"}", "{\"title\": \"Author Response to the Review of Reviewer 2thw Part 1/2\", \"comment\": \"Thank you for your review. Our responses to your comments are as follows:\", \"strengths\": \"> Both high-dimensional statistics and interpretable machine learning are fundamental tasks.\", \"author_response\": \"we have reformed the experiments with artificial data to give further insight into the performance of LCEN relative to other models. Although we agree that theoretical foundations would be valuable, they are not possible due to space limitations and are beyond the scope of this work, whose aim is to thoroughly validate the performance of LCEN empirically with a variety of scientific and non-scientific tasks. We have also added a few previous works to the Introduction that proved desirable theoretical properties of the thresholded LASSO (a LASSO-Clip model), which is related to our algorithm. Lastly, we have expanded the Methods section to include the rationale behind our design choices and to highlight the ablation tests that had been done in the Appendix.\", \"weaknesses\": \"> 1. The proposed method offers minimal novelty. The clip step is essentially a hard-thresholding step. 
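For readers following the thread, the LASSO-Clip-EN sequence under discussion (a LASSO fit, a hard-threshold "clip" of the scaled coefficients, then an elastic-net refit on the surviving features) can be sketched as below. This is a minimal illustration on synthetic data; the alpha values, the cutoff, and the use of scikit-learn estimators are assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)  # two true features

# Standardize so that coefficient magnitudes are comparable before clipping
Xs = StandardScaler().fit_transform(X)

# Step 1: LASSO on the (possibly expanded) feature set
lasso = Lasso(alpha=0.05).fit(Xs, y)

# Step 2: "Clip" -- hard-threshold the scaled coefficients
cutoff = 0.01  # hypothetical value; the paper cross-validates this hyperparameter
keep = np.abs(lasso.coef_) > cutoff

# Step 3: elastic-net refit on the surviving features only
enet = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(Xs[:, keep], y)
print(np.flatnonzero(keep))  # indices of the selected features
```

Here the noise features are dropped either by the LASSO itself or by the clip step, and the elastic net only re-estimates the retained coefficients.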
To remove irrelevant variables, the adaptive lasso (Zou 2006) or adaptive elastic net (Zou and Zhang 2008) could be used instead, as they function as two-step methods that impose higher weights in the second step, penalizing features with smaller coefficients identified in the first step. These approaches are more flexible and robust than a straightforward hard-thresholding step. The combination of lasso with various thresholding approaches is not a surprising idea.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for the detailed clarification and additional revisions. Here is my response to the main points:\\n\\n>We can provide these numbers or ratios of these numbers. To clarify your comment, are you looking for the smallest nonzero coefficient before thresholding or after thresholding? Are you looking for a ratio of that smallest nonzero coefficient with the thresholding (cutoff hyperparameter) used?\\n\\nI was not looking for additional empirical results (before thresholding); my goal was to remark that re-scaling the coefficients is not enough to establish \\u201cuniversally\\u201d well-working threshold values (referencing your statement \\u201cTraditional values of the cutoff hyperparameter are 0.001 to 0.04\\u201d). I think cross-validating the threshold parameter, as suggested by the authors, is a valid procedure to determine the threshold in a task-dependent manner.\\n\\n>we have expanded the limitations to state that one of the limitations of LCEN is \\\"that the feature expansion algorithm is better suited to numerical data over image or text data\\\" and that users in scenarios \\\"with non-numerical data types\\\" may prefer to use other methods.\\n\\nThank you for this addition. I think this statement can even be turned into an advantage of LCEN to identify relationships where we can expect simple algebraic relationships to hold, e.g., for physical laws. 
But your statement in its current form is adequate enough in my opinion.\\n\\nRegarding the interpretability discussion, I agree with you that there is some interpretability of methods like LCEN over neural network-based methods like LassoNet, but I maintain that the feature transformations can significantly impair the interpretability in case more than a handful of transformed features are selected. However, I understand that there is a trade-off to be made between non-cumbersome interpretability and predictive performance.\\n\\nUnfortunately, my biggest remaining concern regarding the theoretical underpinnings of LCEN remains inadequately addressed (Major weakness 2 - technical soundness):\\n\\n>while we agree that theoretical motivations or intuition would improve the paper, we consider that the reformed theoretical background (which features many works showing desirable theoretical properties of the thresholded LASSO, a LASSO-Clip model) in the Introduction section and the rationale behind LCEN added to the Methods section to be sufficient, particularly because we rigorously confirm the performance (accuracy, sparsity, speed) of LCEN relative to other models on a wide variety of experiments for both feature selection and prediction tasks.\\n\\nIf I am not mistaken, the only relevant change regarding the theoretical background is this addition in the introduction section (I do not consider the rationale behind LCEN in the methods section to provide theoretical intuition):\\n\\n>While formal theoretical proofs are beyond the scope of this work due to space limitations, important previous works that proved desirable theoretical properties of the thresholded LASSO (a LASSO-Clip model) include Zhou (2009), Meinshausen & Yu (2009), Zhou (2010), and van de Geer et al. (2011). 
These works provide a theoretical scaffold to partially justify the high performance of LCEN.\\n\\nThis effectively only references previous related works that do provide theoretical contributions using e.g., the restricted isometry property or mutual incoherence, without establishing similar results for LCEN, i.e., conditions for support recovery or estimation consistency. If no formal proofs are provided (which can be expected to some degree for such a contribution at this conference) a convincing and explicit argument as to why previous results carry over to LCEN should be included for a strong contribution.\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 5/7\", \"comment\": \"> Together, these points would require the inclusion of new theoretical results or at least a detailed discussion of existing results, a more extensive contextualization in the related literature of sparse estimation with thresholding and refitting, and more convincing, less simplistic experiments to corroborate their claims, e.g., typical high-dimensional sparse applications.\", \"author_response\": \"we agree with these limitations, but we had included in the Discussion section that LCEN \\\"can model only the functions present in the expansion of dataset features\\\". We do not consider this to be a significant limitation, as highlighted by the great performance of LCEN in multiple tasks containing real data.\", \"regarding_the_wording_on_line_367\": \"our intention was to say \\\"An end user could prioritize creating very sparse models, even at the expense of increasing these models' MSEs. To simulate such a scenario, the LCEN cutoff hyperparameter was increased from the value that minimizes the validation MSE to create sparser models.\\\" We have added this clarification to the main text. An end user that seeks only the minimization of error could ignore these tests with artificially higher cutoffs.\\n\\n> 2. 
The paper claims to propose a non-linear method, but this is somewhat misleading. The final result is a model that is linear in its parameters, but non-linear in its features due to the deterministic feature expansion. Their expressivity naturally limits the expressivity of LCEN, so that it might not be well suited for many real-world applications, where no specific polynomial term should be selected as in the physical law experiments. Highlighting better how LCEN is particularly suited for certain kinds of applications where you would expect simple algebraic expressions to occur in the data-generating process could improve the clarity of the exposition.\"}", "{\"title\": \"Reply to the Author Response\", \"comment\": \"Thank you for your response and clarification. While I understand your perspective, I still believe that incorporating symbolic learning into the experiment comparison would enhance the paper by providing insights into nonlinear term explanations. Alternatively, including feature selection for the black-box model could demonstrate the generality and adaptability of your pipeline. Removing the black-box model comparison altogether might also help to better highlight the unique strengths of your approach.\\nThese are just my thoughts, and I appreciate the effort you\\u2019ve put into this work. I will maintain my current rating based on these considerations. Thank you again for your time and contributions.\"}", "{\"title\": \"Another revised manuscript PDF has been submitted\", \"comment\": \"Thank you very much for the continued engagement and comments. We have submitted a revised version of our manuscript. Changes from the original version are marked in red. 
The biggest changes were adding symbolic regression models to the experimental comparisons and elaborating on the limitations of LCEN in the Discussion section.\\n\\nIn the near future, we would still like to make the changes mentioned in the above comment to further improve this work.\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 1/7\", \"comment\": \"Thank you for your review. Our response to your comments are as follows:\", \"strengths\": \"> This paper demonstrates several notable strengths.\\n\\n> Originality: While the individual components (LASSO/EN, feature transformations, hard thresholding) are well-established techniques, their combination and sequential application in that order represents an original contribution that helps bridge the gap between classical linear models and more powerful black-box approaches.\\n\\n> Technical details and Experiments: The paper shows strengths in its detailed algorithm description and sound experimental evaluation, as well as an appropriate choice of competitor models for the experiments. The experimental methodology is well-structured, progressing from artificial data (to validate basic properties) to known physical laws (to verify feature selection properties) and real-world applications. The presentation of results is very thorough and detailed.\\n\\n> Significance: Further, the work addresses the important challenge of creating interpretable models with non-linear effects that maintain competitive performance, with the demonstrated ability to recover known physical laws while being computationally cheap.\\n\\n> Reproducibility: The code appears to be well-written and correct, although I did not run it.\", \"author_response\": \"your second hypothesis is correct -- the use of a hard thresholding step improves the feature selection capabilities of LCEN, which usually also improves its test-set accuracy. This hard-thresholding step also reduces the runtime. 
We have explicitly added this motivation to the Methods section of the paper.\", \"weaknesses\": \"Major weaknesses\\n> This first part contains the most important concerns that I would expect the paper to answer in order to be relevant for the general ML audience. These major weaknesses, however, might require well-justified additional explanations and/or notable additional work on both contextualizing/establishing theoretical results and a more varied experimental evaluation.\\n\\n> 1. Motivation/Soundness: While the method itself is well-described, the only motivation for its construction is given by the ablation study comparing it against other possible approaches combining the feature-expanded lasso and elastic net sequentially with or without hard thresholding. A proper motivation of the method should give some hints regarding:\"}", "{\"title\": \"Author Response to the Review of Reviewer Ko84 Part 2/2\", \"comment\": \"> Also, this definition of the sparsity might also be hard to incorporate with your definition of interpretability since the complexity introduced by these transformations might even rival that of black-box models, making the model difficult for humans to interpret, despite its sparse structure. I think reviewer knkT also points out that interactive terms are harder to understand within the model. Feature transformations and combinations, such as interaction terms or nonlinear transformations, may improve accuracy but inherently reduce the model's transparency. Without a clear mechanism for explaining these transformations, the model's interpretability could rival that of black-box models. The relationship among sparsity, accuracy (MSE), and interpretability might need to be enhanced within the paper or clearer stated.\", \"author_response\": \"thank you for your comments and for your response. 
We hope that we were able to answer your concerns about this work, and please continue to engage if that has not been the case.\"}", "{\"title\": \"Author Response to the Comment of Reviewer knkT Part 2/2\", \"comment\": \"> The definition of interpretability as \\\"providing f() in a form readily understandable to humans\\\" is reasonable, but I'm not fully convinced that models with numerous transformed features (e.g., higher-order interactions, complex transformations) truly meet this standard of being \\\"readily understandable.\\\"\", \"author_response\": \"thank you for your comments and your engagement. We hope that we were able to address your concerns satisfactorily, and please let us know if that has not been the case.\"}", "{\"comment\": \"Thank you for your efforts in addressing the concerns and enhancing the readability of the manuscript. Considering the additional revisions made, I am raising my score to 5. While I still share some of the concerns raised by reviewer 2thw, my limited familiarity with the related literature makes it challenging to form a definitive judgment. However, I believe the authors' significant efforts deserve recognition.\"}", "{\"title\": \"Author Response to the Review of Reviewer Ko84\", \"comment\": \"Thank you for your review. Our response to your comments are as follows:\", \"strengths\": \"> The method outperforms the previous paper ALVEN in showing better performance in feature selection and accuracy.\", \"author_response\": \"the use of Shapley values for feature selection is indeed an interesting method, and one that could be applied to any model. However, we do not agree that the experiments are unfair; the other methods certainly perform feature selection as evidenced by our experiments and general knowledge. As highlighted by works cited in the Introduction and others, stepwise regression is a poor technique for feature selection. 
Furthermore, stepwise regression is not done in LCEN or for any of the other models used in this work.\", \"weaknesses\": \"> [Poorly Organized Introduction and Irrelevant Works] Starting from the first paragraph of the paper, too many concepts are shown all at once: \\\"statistical models\\\", \\\"causal hypothesis,\\\" etc. without definitions and without clearly linking them together. There is also a weak transition from \\\"statistical models\\\" to \\\"many model architectures\\\".\"}", "{\"title\": \"Author Response to the Comment of Reviewer 2thw\", \"comment\": \"> Thank you very much for your responses, and I appreciate your contribution to the literature on high-dimensional statistics. However, after carefully reviewing your comments, I regret that my concerns remain unresolved. A more detailed and comprehensive comparison of the proposed method with the existing literature, both empirically and theoretically, is essential. I strongly encourage you to thoroughly address my comments and refine the work before considering submission to another journal or conference, should ICLR decide to decline it.\", \"author_response\": \"thank you for your comment. Would it be possible to elaborate further on your concerns, please? On the theoretical side, although we have not provided formal theoretical proofs of the properties of LCEN, we have added to the Introduction multiple works that have proved desirable properties of the thresholded LASSO (that is, a LASSO-Clip model), which provide a background to justify the strong performance of LCEN. 
One of these works (van de Geer et al., 2011) shows that the thresholded LASSO and adaptive LASSO lead to effectively the same results, with thresholded LASSO having a marginal advantage, so our use of hard-thresholding as opposed to adaptive steps should not be a disadvantage.\\n\\nOn the experimental side, we have added comparisons to SCAD and MCP, and have mentioned that we plan to include comparisons with the adaptive EN as well. We have also significantly reformed the feature selection experiments to include more experiments, including experiments containing \\\"high-dimensional data where the number of features is over 1000\\\" as requested, and have included relevant statistics on the results. We can also include specifically the true positive rate and false positive rate (as requested) and also plot ROC curves, although we note that the Precision-Recall curves include the true positive rate. Finally, we mentioned that we would like to include benchmark data from the UCI dataset repository, and we are open to using specific datasets suggested by you.\"}", "{\"title\": \"Revised manuscript PDF has been submitted\", \"comment\": \"Once again, we would like to thank the reviewers for their comments. We have submitted a revised version of our manuscript. Changes from the previous version are marked in red.\\n\\nIn addition to the changes that have already been made, we would like to extend the feature selection experiments with artificial data to include scenarios with noise. We also want to include more feature selection datasets that are used as benchmarks for this task. Finally, we also plan on expanding the multicollinear data experiments to include results for other models.\"}", "{\"title\": \"Author Response to the Review of Reviewer Ko84 Part 1/2\", \"comment\": \"> Thank you for the response and modification on the paper. 
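The true positive and false positive rates mentioned in these responses are straightforward to compute from a true support and a selected support; a small illustration with hypothetical supports (not results from the paper):

```python
import numpy as np

# Hypothetical example: 8 candidate features, 2 of them truly in the model
true_support = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
selected = np.array([1, 0, 1, 1, 0, 0, 0, 0], dtype=bool)  # 1 hit, 1 miss, 2 false picks

tpr = (selected & true_support).sum() / true_support.sum()      # recall over true features
fpr = (selected & ~true_support).sum() / (~true_support).sum()  # fraction of noise features kept
fnr = 1.0 - tpr                                                 # fraction of true features missed
print(tpr, fpr, fnr)  # TPR = 0.5, FPR = 1/3, FNR = 0.5
```

Sweeping a selection threshold (such as the clip cutoff) and recording (FPR, TPR) pairs at each value is exactly what tracing an ROC curve amounts to.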
I still have some questions and concerns about your paper:\", \"author_response\": \"in our response, we equated the *definition* of interpretability used in our work and that used by SHAP or LIME. Certainly no one questions that SHAP and LIME provide interpretability to models, and the way they do so is by providing some functional form $y = \\\\sum_{i}{k_i X_i}$ for the inputs X and outputs y. The end-result (not the methodology, but the end-result) of LCEN (and many other interpretable methods) is the same, providing a functional form relating the inputs X to the outputs y. As you correctly mentioned, SHAP and LIME are post-hoc methods, which select features *a posteriori* based on some minimization of error, whereas LCEN does feature selection during the model training. As mentioned in the above answer, this is exactly the difference between wrapper methods and embedded methods, and it is incorrect to say only one of these methodologies is valid or functional.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your thorough and detailed response to my review, as well as the extensive revisions to your submission. The quality of the paper has improved significantly through these changes. I particularly appreciate the following:\\n\\n1. The more comprehensive experimental evaluation, especially a) including high-dimensional settings, b) adding comparisons with non-convex methods (SCAD, MCP), c) including a debiasing approach, d) the new artificial data experiments demonstrating LCEN's performance across varying n/p ratios\\n2. The clearer explanations of design choices and hyperparameter selection\\n3. 
The expanded related work section and better contextualization within the literature\\n\\nHowever, I maintain several concerns that I don't feel were fully addressed:\\n\\n>Regarding the coefficient magnitudes and the clip step, note that the clip step is executed on the scaled coefficients, so these coefficients are supposed to have similar magnitudes (if they are from true features) independently of the data's values, and the coefficients from false features tend to have lower magnitudes, which explains why the clip step is effective.\\n\\nWhile you note that \\\"the clip step is executed on the scaled coefficients,\\\" scaling doesn't address the fundamental issue of relative coefficient magnitudes. The smallest non-zero coefficient could still be arbitrarily close to or far from other coefficients, which affects the effectiveness of hard thresholding.\\n\\n>We agree that formal theoretical foundations would improve this work, but we believe that such results are beyond the scope of this work, as we preferred to validate the performance of LCEN empirically with a variety of scientific and non-scientific tasks.\\n\\nWhile I appreciate the extensive empirical validation, I maintain that a strong methodological contribution should provide theoretical motivation or at least intuition beyond empirical performance on selected datasets. This is particularly important given LCEN's relationship to well-studied methods like thresholded LASSO.\\n\\n>we agree with these limitations, but we had included in the Discussion section that LCEN \\\"can model only the functions present in the expansion of dataset features\\\". We do not consider this to be a significant limitation, as highlighted by the great \\nperformance of LCEN in multiple tasks containing real data.\\n\\nWhile you acknowledge that LCEN \\\"can model only the functions present in the expansion of dataset features,\\\" I believe this limitation deserves more prominence. 
Most modern ML applications deal with complex data types (images, text, sequencing data) instead of \\u201csmall n, small p\\u201d tabular data like Boston Housing, where deterministic feature expansions may be insufficient. Though LCEN need not address all data types, this limitation should be more clearly stated.\\n\\n>Based on this definition, LCEN is indeed perfectly interpretable even if it may be challenging to interpret an LCEN model with multiple dozens or hundred of coefficients.\\n\\nThe definition of interpretability as \\\"providing f() in a form readily understandable to humans\\\" is reasonable, but I'm not fully convinced that models with numerous transformed features (e.g., higher-order interactions, complex transformations) truly meet this standard of being \\\"readily understandable.\\\"\\n\\n----\\n\\nGiven the substantial improvements to the paper and your detailed engagement with the review comments, I am raising my score to 5 for now. The method represents an interesting contribution to interpretable ML, particularly for specific domains where algebraic relationships are expected. However, I believe the remaining concerns should be addressed to recommend acceptance.\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 4/7\", \"comment\": \"> In the comparison against the ALVEN algorithm - which uses a similar feature expansion scheme, but employs F-tests for feature selection - 30 training data points are simulated from a fourth-degree univariate polynomial with additive noise. Although this setting was reproduced from the ALVEN paper, superiority on this simple example is not convincing to establish superiority of LCEN over ALVEN in a more general sense.\", \"author_response\": \"we have added more relevant citations to the introduction, both to add other relevant methods and to highlight these important works showing that the thresholded LASSO has desirable theoretical properties. 
As mentioned above, we also added comparisons with an LCEN followed by OLS model to the ablation studies. This debiased model estimates the coefficients more accurately based on the \\\"Relativistic energy\\\" experiment, but it leads to a worse test set MSE on the experiments with empirical datasets from processes with unknown physical laws.\"}", "{\"comment\": \"> [Algorithm or Workflow] The paper title says that it is a novel feature selection algorithm for the machine learning, but in the paper, it is emphasizing that LCEN prediction is showing the lowest MSE, which seems to be the metric for a good predictor. Therefore, I am wondering whether the author is considering LCEN as a feature selection workflow, or a machine learning algorithm? If it is a feature selection workflow, can it be applied to other ML models after the feature is extracted? If it is an ML algorithm for regression and real value prediction, the introduction section seems to be inconsistent since it is focusing on the feature selection techniques comparison.\", \"author_response\": \"we consider LCEN to be both, and that LCEN is able to make accurate predictions precisely because it possesses good feature selection capabilities. Our goal with the experiments was first to highlight LCEN's feature selection capabilities (artificial datasets and empirical datasets from processes with known physical laws), then to highlight its predictive capabilities (empirical datasets from processes with unknown physical laws). We have revamped the experiments with artificial data to better showcase LCEN's feature selection capabilities.\\n\\nAlthough not shown in this paper, we believe that LCEN indeed can be applied to other ML models after feature extraction, and this is a direction we want to pursue in future work. This is emphasized by LCEN's great results on the new \\\"Artificial Linear\\\" experiment for feature selection. 
LCEN consistently outperformed the other models tested (LASSO, EN, FS-GAMs, SCAD, MCP) on this task based on the Matthews Correlation Coefficient (MCC). LCEN achieves perfect feature selection in the scenarios with more samples than features ($N > P$). LCEN surpassed the other methods in the other scenarios in terms of absolute MCC by 19.8\\\\% on average when $N = P$ and by 8.2\\\\% on average when $N < P$.\", \"title\": \"Author Response to the Review of Reviewer Ko84 Part 3/3\"}", "{\"title\": \"Author Response to the Review of Reviewer GMaV Part 1/2\", \"comment\": \"Thank you for your review. Our response to your comments are as follows:\", \"strengths\": \"> The method proposed by the author is simple.\\n> The empirical results look good in many cases.\", \"author_response\": \"you are correct; there was a typo and the standard deviation is supposed to equal 1. This has been fixed.\", \"weaknesses\": \"> The scope of this paper's contribution is not clearly and correctly defined. The argument the authors made in the abstract \\\"The most commonly used interpretable architectures, such as LASSO or elastic net, are limited to linear predictions and have poor feature selection capabilities\\\" is overly strong. There exists a plethora of interpretable architectures, such as Concept Bottleneck Models [1], Regression/Decision Trees [2], GAMs [3] and so on. These algorithms (as well as their improved versions) can all perform feature selection and non-linear predictions. Furthermore, the authors later acknowledge on lines 44-45 that \\\"nonlinear models may also be made sparse, and even interpretable\\\" and even include a comparison with GAM in their experiments. I believe the statement in the abstract needs to be edited for clarity and correctness.\", \"questions\": \"> Line 171, how can data be set such that its standard deviation = 0? 
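The Matthews Correlation Coefficient used in this comparison treats feature selection as a binary classification of each candidate feature (selected vs. not selected); a minimal illustration with a hypothetical support:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Hypothetical supports: 1 = feature is in the true model / was selected
true_support = np.array([1, 1, 0, 0, 0, 0, 0, 0])
selected = np.array([1, 1, 1, 0, 0, 0, 0, 0])  # both true features plus one false positive

mcc = matthews_corrcoef(true_support, selected)
print(round(mcc, 3))  # 0.745
```

An MCC of 1 corresponds to perfect support recovery and 0 to chance-level selection, which is why it is a stricter summary than accuracy when true features are rare.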
It seems to be a typo since on line 141-142 the standard deviation is set to 1, instead of 0.\"}", "{\"summary\": \"This paper presents LASSO-Clip-EN (LCEN), a machine learning algorithm designed to generate interpretable, non-linear models. The algorithm begins with LASSO to perform feature selection, retaining only features with coefficients that exceed a defined threshold in the selected model. These filtered features are then used to fit an ElasticNet model, which undergoes a similar thresholding process to retain only significant features. The final model, consisting solely of the ElasticNet, is then used for prediction. The algorithm achieves good RMSE on artificial and real-world datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method proposed by the author is simple.\", \"The empirical results look good in many cases.\"], \"weaknesses\": \"- The scope of this paper's contribution is not clearly and correctly defined. The argument the authors made in the abstract \\\"The most commonly used interpretable architectures, such as LASSO or elastic net, are limited to linear predictions and have poor feature selection capabilities\\\" is overly strong. There exists a plethora of interpretable architectures, such as Concept Bottleneck Models [1], Regression/Decision Trees [2], GAMs [3] and so on. These algorithms (as well as their improved versions) can all perform feature selection and non-linear predictions. Furthermore, the authors later acknowledge on lines 44-45 that \\\"nonlinear models may also be made sparse, and even interpretable\\\" and even include a comparison with GAM in their experiments. I believe the statement in the abstract needs to be edited for clarity and correctness.\\n\\n- Writing of this paper needs to be significantly improved. 
Section 2 covers the entire method in just two paragraphs: one paragraph describes the hyperparameters, and the other, spanning 30 lines, explains the entire algorithm. I believe separating these paragraphs into smaller, more consistent pieces could improve the readability. Additionally, the Methods section lacks any discussion on the rationale behind the design choices.\\n\\n- While the algorithm provided is simple, using LASSO for feature selection itself is not new, as the author has noted in the introduction line 68-98. The method proposed in this work, which is using features selected from LASSO for ElasticNet thus lacks significant originality.\\n\\n[1] Koh, Pang Wei, et al. \\\"Concept bottleneck models.\\\" International conference on machine learning. PMLR, 2020.\\n[2] Loh, Wei\\u2010Yin. \\\"Classification and regression trees.\\\" Wiley interdisciplinary reviews: data mining and knowledge discovery 1.1 (2011): 14-23.\\n[3] Trevor Hastie, Robert Tibshirani. \\\"Generalized additive models.\\\" Statist. Sci. 1(3): 297-310 (August, 1986).\", \"questions\": [\"Line 171, how can data be set such that its standard deviation = 0? It seems to be a typo since on line 141-142 the standard deviation is set to 1, instead of 0.\", \"Given the current formulation of the algorithm, it makes me wonder why the author has chosen to perform the feature selection with LASSO and EN, but not other algorithms such as Ridge Regression and many others. It could easily lead to a family of algorithms that combines the feature selection for multiple algorithms. More explanation and experimental results on the effect of switching underlying algorithm components could be beneficial to this paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a feature selection method based on the Lasso, feature expansions and hard-thresholding steps. 
All reviewers suggested rejection and agreed on the main problems with the paper. As a machine learning work, the paper is significantly limited in several respects: experimental comparison, theoretical analysis and, most importantly, novelty, as the paper is essentially rehashing existing ideas. The authors need to properly understand the literature in statistics and machine learning on these topics. No amount of discussion on the authors' side is going to compensate for these deficiencies.\", \"additional_comments_on_reviewer_discussion\": \"N/A\"}", "{\"summary\": \"This paper introduces the LASSO-Clip-EN (LCEN) algorithm, which combines lasso with feature expansions (including quadratic, cubic, and interaction terms), elastic net, and hard-thresholding steps. The method maintains interpretability due to its foundation in lasso-type techniques and its emphasis on sparse feature selection.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Both high-dimensional statistics and interpretable machine learning are fundamental tasks.\", \"weaknesses\": \"1. The proposed method offers minimal novelty. The clip step is essentially a hard-thresholding step. To remove irrelevant variables, adaptive lasso (Zou 2006) or adaptive elastic net (Zou and Zhang 2008) could be used instead, as they function as two-step methods that impose higher weights in the second step, penalizing features with smaller coefficients identified in the first step. These approaches are more flexible and robust than a straightforward hard-thresholding step. The combination of lasso with various thresholding approaches is not a surprising idea.\\n2. The authors should discuss and compare their method with existing techniques like SCAD and MCP, which specifically address the lasso's tendency to over-select features. 
Added features could be more effectively managed using spline-based methods (natural or B splines) or sparse generalized additive models (Ravikumar et al 2009, Haris et al, 2022), which are better suited for high-dimensional settings. The literature review in this paper is notably insufficient. Both theoretical and numerical comparisons between the proposed method and adaptive elastic net with the splines are necessary. \\n3. High-dimensional statistics papers typically include solid theoretical foundations due to the simplicity of the models. This paper lacks any theoretical analysis. The author should provide convergence guarantees, consistency results, or bounds on estimation error for the proposed method.\\n4. The experimental validation is limited. The authors should conduct extensive experiments to assess performance in feature selection. Specifically, besides MSE and MAD, the authors should also consider false positive and false negative rates for feature selection. It is not enough to only show the number of features selected. Comprehensive analysis using extensive benchmark datasets is also necessary to demonstrate the efficacy of the proposed method. The authors can use the benchmark data from UCI with a focus on high-dimensional data where the number of features is over 1000.\", \"questions\": \"No additional questions. See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces the LCEN (LASSO-Clip-EN) algorithm, a novel method for feature selection in nonlinear, interpretable machine learning models. LCEN employs two cross-validation steps: the first determines the complexity of the nonlinear terms, and the second defines the complexity of the final linear model. 
The authors show that LCEN improves both feature sparsity and accuracy compared to traditional approaches.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The method outperforms the previous paper ALVEN, showing better performance in feature selection and accuracy.\", \"weaknesses\": \"*[Poorly Organized Introduction and Irrelevant Works]* Starting from the first paragraph of the paper, too many concepts are shown all at once: \\\"statistical models\\\", \\\"causal hypothesis,\\\" etc., without definitions and without clearly linking them together. There is also a weak transition going from the \\\"statistical models\\\" to \\\"many model architectures\\\". Also, each of the paragraphs in the introduction section is missing a concluding sentence to give readers an idea of its main takeaway. I am not sure why some of the paragraphs are included within this section: for example, the L_1/group Lasso regularization for neural networks and the following methods seem to me to be regularization for the complexity of the neural network (in other words, keeping the robustness of the neural network by reducing parameters) rather than really doing feature selection to identify the useful features for the model. I am confused about why those methods should be mentioned here. This also connects to my question about the definition of sparsity here.
Therefore, I think the symbolic learning reviews should also be mentioned within the introduction and compared/discussed within the experimental sections since both methods tend to provide non-linear features/relationship for better prediction of the datasets [1, 2] (I just put two papers here, but there are a whole field working on this).\\n\\n*[Missing the ablation tests for the algorithm]* In the Lasso-Clip-EN, the feature selection can be separated into three steps, and I am not sure how each step would contribute to the model performance, it would be nice to show how each step are contributing to the feature selection process so that the paper would be clearer and more understandable instead of directly explain what the algorithm is.\\n\\n*[Unfair Comparison between the LCEN and other methods]* In the experiment section, the authors compare LCEN with other non-linear methods or linear methods and prove that it surpasses many other methods. I am not sure it is a fair comparison to compare those methods with the feature selected EN algorithm. For the other linear and non-linear methods, they can also do forward/backward feature selections based on their feature importance in the training dataset so as to get a better performance [3], and make the model more robust (The author have also mentioned stepwise regressions within the related work section), but none of them is discussed within the experiments selection for validations. It seems to unfair to compare a feature selection workflow to the other single machine learning model.\\n\\n[1] Xu, Kai, et al. \\\"A bayesian-symbolic approach to reasoning and learning in intuitive physics.\\\" Advances in neural information processing systems 34 (2021): 2478-2490.\\n[2] Augusto, Douglas Adriano, and Helio JC Barbosa. \\\"Symbolic regression via genetic programming.\\\" Proceedings. Vol. 1. Sixth Brazilian symposium on neural networks. IEEE, 2000.\\n[3] Marc\\u00edlio, Wilson E., and Danilo M. Eler. 
\\\"From explanations to feature selection: assessing SHAP values as feature selection mechanism.\\\" 2020 33rd SIBGRAPI conference on Graphics, Patterns and Images (SIBGRAPI). Ieee, 2020.\", \"questions\": \"*[The definition of Sparsity]* In the second paragraph of the introduction section, the author mentioned that: \\\"It should be noted that nonlinear models may also be made sparse, and even interpretable, as described later in this section and the rest of this work.\\\" The definition of sparse here seems not to be clear to me. For the decision trees, it might be the number of leaves, for the additive models, it might be the number of non-zero coefficient terms, and for the non linear models, it is not clear to me what the definition of sparsity here means. Are you referring to the number of non-zero terms or the number of features that is used to construct the model? Different definitions might lead to different conclusions to your model design and experiment analysis. Also, in Table 4, for LCEN, are the feature number be the number of non-zero terms after feature transformation or the number of the original features? In other words, are $x$ and $\\\\log(x)$ considered as two variables or one?\\n\\n*[The noise level addition]* In the paper, the noise level $\\\\epsilon$ is not clearly defined starting from the second step of the relativistic energy. I am not sure what the noise level is really defined for 20%, what are the real difference for the feature values be when the noise level change from 20% to 30%? Is this noise similar to the experimental error distribution?\\n\\n*[The definition of interpretability]* In the paper, the author mentioned that the sparsity of the model is leading to a better explanation/interpretability to the model. 
Here, I would like to ask for a clarification of the interpretability of the model since, from my perspective, there are also interactive terms within the model which reduce the explainability of the model. I am worried that, when the final equation is given, it cannot really explain why the equation term should be $x^2 \\log x$ instead of $x \\log x$. Is there any verification step for those discovered variables? On the other hand, in LCEN, you will have different cutoffs for the feature selection, as you have shown in Table 4; how can you determine which cutoff is the one that corresponds to the real physics?\\n\\n*[Algorithm or Workflow]* The paper title says that it is a novel feature selection algorithm for machine learning, but in the paper, the emphasis is that LCEN prediction shows the lowest MSE, which seems to be the metric for a good predictor. Therefore, I am wondering whether the authors consider LCEN a feature selection workflow or a machine learning algorithm. If it is a feature selection workflow, can it be applied to other ML models after the features are extracted? If it is an ML algorithm for regression and real-value prediction, the introduction section seems to be inconsistent, since it focuses on comparing feature selection techniques.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to the Review of Reviewer knkT Part 7/7\", \"comment\": \"> (Q) The term \\u201chigh hyperparameter variance\\u201d is not explained and its meaning is not clear to me. Does it mean \\u201cuncertainty about the correct hyperparameter values\\u201d?\", \"author_response\": \"we have clarified in the Methods section that setting $lag > 0$ adds $X$ and $y$ features from previous time steps.\"}", "{\"title\": \"Author Response to the Review of Reviewer 2thw Part 2/2\", \"comment\": \"> 4. The experimental validation is limited. 
The authors should conduct extensive experiments to assess performance in feature selection. Specifically, besides MSE and MAD, the authors should also consider false positive and false negative rates for feature selection. It is not enough to only show the number of features selected. Comprehensive analysis using extensive benchmark datasets is also necessary to demonstrate the efficacy of the proposed method. The authors can use the benchmark data from UCI with a focus on high-dimensional data where the number of features is over 1000.\", \"author_response\": \"we have reformed the experiments with artificial data to give further insight on the performance of LCEN relative to other models. These experiments now include metrics on the feature selection performance of multiple methods (LASSO, EN, FS-GAMs, SCAD, MCP, and LCEN) for multiple datasets containing [100, 500, 1000 samples] x [100, 500, 1000 true features] x [0% noise, with higher values coming soon] x [25%, 50%, 75%, 100% additional false features]. LCEN consistently outperformed the other models tested (LASSO, EN, FS-GAMs, SCAD, MCP) on this task based on the Matthews Correlation Coefficient (MCC). LCEN achieves perfect feature selection in the scenarios with more samples than features ($N > P$). LCEN surpassed the other methods in the other scenarios in terms of absolute MCC by 19.8\\\\% on average when $N = P$ and by 8.2\\\\% on average when $N < P$. We also plan on including comparisons using the UCI benchmark data.\\n\\nOn a different matter, we would like to highlight that the experiments in Section 3.2 feature two types of empirical data. The first type included data from process whose physical laws are known. In Table 3, we highlight that LCEN selects \\\"only correct features\\\" for all of these datasets, indicating a true positive rate of 100% and a false positive rate of 0%. The second type included data from process whose physical laws are unknown. 
In this scenario, it is impossible to calculate these rates, as the true and false features are not known in the first place. We have shown the number of features selected (particularly in the \\u201cDiesel Freezing Point\\u201d dataset, which contains more features than samples) to highlight that LCEN can build sparse yet accurate models. We have also amended the introduction to highlight that we define \\\"sparsity\\\" as \\\"a model that uses few input features, particularly relative to the total number of features available\\\".\"}", "{\"title\": \"Reply to Author Response\", \"comment\": \"Thank you for the effort in incorporating the experimental sections on symbolic regression. The inclusion of the symbolic learning concept has significantly enhanced the quality and depth of the paper.\\n\\nI understand that, given the constraints on time for discussions, conducting a comprehensive analysis and comparison of symbolic learning methods can be challenging. However, I encourage the authors to further expand on the comparative explanations between the feature combinations generated or selected by LCEN and those derived through symbolic learning. Such an additional comparison would provide clearer insights into how interpretable and meaningful the explanations are relative to existing methods.\\n\\nAdditionally, based on the results from gplearn, the performance of its model appears to be significantly lower than that of the LCEN pipeline. A more detailed discussion regarding potential reasons for this disparity, such as differences in the number of transformations or other methodological factors, would enhance the paper. Rather than simply adding experiments, including this discussion would provide valuable context and improve the overall narrative.\\n\\nIn recognition of the substantial efforts demonstrated in the paper, I will increase my evaluation score from 3 to 5. 
I hope the authors will place greater emphasis on discussing the interpretability of the model, enabling readers to better understand why LCEN is more likely to be preferred over alternative approaches.\"}", "{\"summary\": \"This paper introduces LASSO-Clip-EN (LCEN), a supervised learning algorithm with built-in feature selection and non-linear feature effects through a deterministic feature expansion scheme including polynomial transformations, interaction effects, and basic non-linear transformations such as log or inverse and combinations of the aforementioned. The final estimator is obtained as a hard-thresholded elastic net estimator on an expanded feature set. The hyperparameters for the feature expansion scheme are first determined using cross-validation on a lasso model with additional hard thresholding, after which the elastic net hyperparameters are tuned using the feature expansion scheme determined in the lasso step. This results in a sequential algorithm that produces sparse linear coefficients of deterministic non-linear feature transformations, with the complexity of the transformations determined in a first step.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper demonstrates several notable strengths.\", \"**Originality**: While the individual components (LASSO/EN, feature transformations, hard thresholding) are well-established techniques, their combination and sequential application in that order represents an original contribution that helps bridge the gap between classical linear models and more powerful black-box approaches.\", \"**Technical details and Experiments** The paper shows strengths in its detailed algorithm description and sound experimental evaluation, as well as an appropriate choice of competitor models for the experiments. 
The experimental methodology is well-structured, progressing from artificial data (to validate basic properties) to known physical laws (to verify feature selection properties) and real-world applications. The presentation of results is very thorough and detailed.\", \"**Significance**: Further, the work addresses the important challenge of creating interpretable models with non-linear effects that maintain competitive performance, with the demonstrated ability to recover known physical laws while being computationally cheap.\", \"**Reproducibility**: The code appears to be well-written and correct, although I did not run it.\"], \"weaknesses\": \"## Major weaknesses\\n\\nThis first part contains the most important concerns that I would expect the paper to answer in order to be relevant for the general ML audience. These major weaknesses, however, might require well-justified additional explanations and/or notable additional work on both contextualizing/establishing theoretical results and a more varied experimental evaluation. \\n\\n1. **Motivation/Soundness**: While the method itself is well-described, the only motivation for its construction is given by the ablation study comparing it against other possible approaches combining the feature-expanded lasso and elastic net sequentially with or without hard thresholding. A proper motivation of the method should give some hints regarding:\\n * **A**: Why does LCEN use a sequential procedure for determining the complexity of the feature transformations and the elastic net hyperparameters separately? Why not perform one big hyperparameter search for both the degree/lag hyperparameters of the transformations and the EN alpha and l1_ratio of EN (and then hard threshold the selected coefficients once)?\\n * **B**: Why use a hard thresholding step at all after having obtained already sparse estimates from lasso or elastic net? 
If the aforementioned support recovery conditions for lasso are met and the lasso successfully identifies the true features, then this additional post-thresholding/clip step might remove relevant features with non-zero effects and thus harm both feature selection and likely predictive performance. If the conditions are violated, the lasso is known to overselect irrelevant features, which can be alleviated by such a hard-thresholding operation of the lasso estimates (Zhou, 2010, Meinshausen and Yu, 2009). If this is the case this motivation should be mentioned more explicitly in the paper.\\n * **C**: Why does the final step also include a hard-thresholding step? Typically, hard-thresholding after the lasso (or equally elastic net) improves support recovery but slightly harms estimation error. Hence, the literature revolving around sequential lasso estimation with hard thresholding (Zhou, 2009;2010;2023, van de Geer et al., 2011) uses this approach only in the first step to identify the relevant features, and in the final step fits an OLS model on the reduced feature set to counteract the shrinkage bias introduced by L1 and L2 regularization in the coefficients (van de Geer et al., 2011, Belloni and Chernozhukov, 2013). Such a debiasing step at the end is missing from LCEN, whose goal decidedly is not only to select the correct features but also produce the correct coefficients.\\n * **D**: Why does the first step use a different regularizer (lasso) than the second step (elastic net)? I suspect this will produce some theoretical inaccuracies, as parts of the hyperparameters were tuned toward another objective than the others.\\n\\n2. 
**Technical soundness**: The paper does not provide any theoretical results motivating or justifying the method, nor does it establish results under which conditions the algorithm can perform support recovery, i.e., select the true (transformed) features involved in the data-generating process with high probability, or achieve estimation consistency. In the sparse estimation literature, various classical conditions such as restricted isometry/eigenvalue/nullspace, irrepresentability, or mutual incoherence conditions are discussed to this end (e.g., van de Geer and B\\u00fchlmann, 2009). Moreover, the relative coefficient magnitudes, in particular the smallest non-zero coefficient, are important quantities in these analyses (e.g., Zhou, 2010). This is important for the thresholding/clipping step, whose success depends strongly on the relative magnitude of the smallest non-zero coefficients. The relationship of LCEN\\u2019s ability for support recovery should be related to these concepts, at least in the form of a discussion of their applicability or what changes under LCEN instead of (multi-stage) thresholded lasso/EN (formally would be better). \\n\\n I think for LCEN to be properly presented and analyzed for the broader machine learning audience, answering questions roughly similar to the given references is expected for an extension/variation of the multi-stage thresholded lasso (Zhao, 2009). Under which conditions can LCEN select the true transformed features? Under which conditions is the estimator consistent? How does the algorithm\\u2019s runtime and complexity scale with increasing $n$ and $p$ and \\u201cdegree\\u201d, and at which point does it become infeasible on a CPU? How does it perform on varied tasks with different amounts of sparsity and ratios of samples to features (in particular, experiments in a high-dimensional setting are missing where $p>>n$ - however these are usually most important sparse regression)\\n\\n3. 
**Numerical experiments**: While the experiments are numerous and well executed, they are insufficient to show the general effectiveness of the method as well as to corroborate the investigated claims. The first two experiments, aimed at showing the robustness of LCEN against increasing noise levels and multicollinearity, are insufficient to convince the general reader since they are overly simplistic toy examples using a simple additive data-generating process with a handful of variables at best. The relativistic energy example also includes only two coefficients to be estimated. In the comparison against the ALVEN algorithm - which uses a similar feature expansion scheme, but employs F-tests for feature selection - 30 training data points are simulated from a fourth-degree univariate polynomial with additive noise. Although this setting was reproduced from the ALVEN paper, superiority on this simple example is not convincing to establish superiority of LCEN over ALVEN in a more general sense. The three subsequent experiments on recovering known physical laws from data involve data sets with ($n=293, p=2$), ($n=6, p=1$), ($n=8, p=1$) observations and (original, unexpanded) features, respectively. For the classical ML datasets, Diesel Freezing Point data has $n=395$ samples and $p=401$ features, Abalone has $n=4177$ samples and $p=8$ features, Boston Housing has $n=506$ samples and $p=13$ features, Concrete Compressive Strength has $n=1030$ samples and $p=8$ features. Finally, GEFCom 2014 is a time-series prediction task. Together, these set-ups seem inappropriate for a thorough empirical evaluation of the proposed method. For example, all but one method select all $8$ features on the Abalone data set. 
An adequate experimental evaluation should include datasets with varying properties, for example, {small n, large n}x{small p, large p}x{low ground truth sparsity, high ground truth sparsity}x{low complexity of data-generating process, medium complexity, high complexity}. Moreover, the experimental results should come with reported uncertainties of the performance metrics, particularly in model comparisons (e.g., mean+standard deviation over 10 splits).\\n\\n4. **Related literature/Novelty**: The paper lacks a discussion of previous thresholded-lasso works, and particularly the established theoretical results therein, almost entirely (below references are a starting point). In my opinion, this should be an important part of the related literature section of the LCEN paper, as LCEN is very similar to the thresholded lasso with refitting, but using a deterministically expanded feature set instead of the raw features and using another regularized model, again with hard thresholding, instead of OLS in the second step.\\n\\nTogether, these points would require the inclusion of new theoretical results or at least a detailed discussion of existing results, a more extensive contextualization in the related literature of sparse estimation with thresholding and refitting, and more convincing, less simplistic experiments to corroborate their claims, e.g., typical high-dimensional sparse applications.\\n\\n## Minor weaknesses\\n\\nThe second part on minor weaknesses contains straightforwardly actionable items that can potentially be resolved by answering them satisfactorily in the discussion period and including the relevant parts in the paper\\u2019s main text, or, by running smaller additional experiments.\\n\\n1. The clipping step seems to be an important part of LCEN looking at the experimental results. However, in the method\\u2019s description and the accompanying algorithm, the cutoff value for hard thresholding is simply taken as a given hyperparameter. 
Only in line 367 in the experimental section is it mentioned that the cutoff was increased \\u201cfrom the validation-optimal value\\u201d. Its choice should be made clearer, since it clearly is not a robust/insensitive hyperparameter, except for cases where the true non-zero coefficients are far away from 0 and the cutoff is near 0. Other works on the thresholded lasso derive concrete values recommended for the cutoff (e.g., Belloni and Chernozhukov, 2013; Zhou, 2023).\\n2. The paper claims to propose a non-linear method, but this is somewhat misleading. The final result is a model that is linear in its parameters, but non-linear in its features due to the deterministic feature expansion. Their expressivity naturally limits the expressivity of LCEN, so that it might not be well suited for many real-world applications, where no specific polynomial term should be selected as in the physical law experiments. Highlighting better how LCEN is particularly suited for certain kinds of applications where you would expect simple algebraic expressions to occur in the data-generating process could improve the clarity of the exposition. Similarly, it should be highlighted that in general use cases, the complex non-linear feature transformations hinder meaningful interpretability. If more than a handful transformed features are selected, including potentially higher-order interaction terms, polynomial expansions and fixed non-linear transforms.\\n3. As touched on in the previous point, the paper also claims to yield an interpretable model. However, the interpretability is strongly impaired through the use of the deterministic feature transformations and interaction effects. If the model is not extremely sparse, any meaningful interpretability is quickly lost in the presence of feature transformations such as $\\\\log(X_1)/X_1^2$ or $X_1^3 \\\\cdot X_2^2$.\\n4. 
The lengthy and overly detailed description of the results, as well as their repetition in the discussion section, is in large part irrelevant for the general machine learning audience. The results in the main text should be stated concisely, with verbose descriptions of the obtained results and lengthy discussions of these results moved to the appendix.\\n5. It is odd that the focus of the experiments is often on precise parameter estimation of a sparse subset of transformed features, while using only shrinkage-inducing regularizers at each step of LCEN without an explicit debiasing strategy (e.g., through a final OLS fit), as commonly used for hard thresholded lasso (Zhou, 2023, Meinshausen and Yu, 2009, Belloni and Chernozhukov, 2013). \\n\\n## References\\n\\nZou, Hui. \\\"The adaptive lasso and its oracle properties.\\\" Journal of the American statistical association 101.476 (2006): 1418-1429.\\n\\nMeinshausen, Nicolai, and Bin Yu. \\\"Lasso-type recovery of sparse representations for high-dimensional data.\\\" (2009): 246-270. (hard-thresholding lasso is sign-consistent)\\n\\nZhou, Shuheng. \\\"Thresholding procedures for high dimensional variable selection and statistical estimation.\\\" Advances in Neural Information Processing Systems 22 (2009).\\n\\nZhou, Shuheng. \\\"Thresholded Lasso for high dimensional variable selection and statistical estimation.\\\" arXiv preprint arXiv:1002.1583 (2010).\\n\\nZhou, Shuheng. \\\"Thresholded Lasso for high dimensional variable selection.\\\" arXiv preprint arXiv:2309.15355 (2023).\\n\\nVan de Geer, Sara, Peter B\\u00fchlmann, and Shuheng Zhou. \\\"The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso).\\\" (2011): 688-749.\\n\\nvan de Geer, Sara A., and Peter B\\u00fchlmann. \\\"On the conditions used to prove oracle results for the Lasso.\\\" Electronic Journal of Statistics 3 (2009): 1360.\\n\\nBelloni, Alexandre, and Victor Chernozhukov. 
\\\"Least squares after model selection in high-dimensional sparse models.\\\" (2013): 521-547.\", \"questions\": \"The following list contains both questions (Q) as well as suggestions (S). More questions can be found under \\u201cWeaknesses\\u201d, points 1, A-D.\\n\\n1. (Q) The algorithm does not clearly state what is meant by \\u201cscaled training data\\u201d in line 191. I assume this is supposed to be the scaled expanded data set using the transformation hyperparameters from the lasso step except those transformed features removed by the lasso-clip step? Somewhat confusingly, in the cross-validation part of the EN step, all feature expansions explicitly only happen temporarily and within the loop but not outside of it, so it should be clarified which scaled training data line 191 refers to.\\n2. (Q) The noise is given as a percentage value (of what?) instead of the standard deviation of the simulated additive noise. The meaning of this percentage should either be clearly stated, or the standard deviation used in this setting simply stated instead.\\n3. (Q) The term \\u201chigh hyperparameter variance\\u201d is not explained and its meaning is not clear to me. Does it mean \\u201cuncertainty about the correct hyperparameter values\\u201d?\\n4. (Q) The caption of Table 4 states that the CV-optimal number of features for FS-GAM was 2 out of 401 features, which oddly results in the highest test RMSE of all FS-GAM models at different sparsity levels displayed in Table 4 (8.32 vs minimum 5.09). This suggests something potentially went wrong in the cross-validation procedure.\\n5. (S) Using the Boston Housing dataset has been discouraged in the ML community for a few years due to inherent biases in the dataset. It is generally recommended to replace it with, e.g., the California Housing data.\\n6. (S) The words models and architectures are used interchangeably in the paper. 
However, methods like lasso or elastic net are usually not referred to as architectures, as this term typically applies to different neural network architectures.\\n7. (S) The paper contains some claims I find too bold or without the necessary context, e.g., that lasso or elastic net \\u201chave poor feature selection\\u201d (support recovery) properties, which is at least misleading. \\n8. (S) The abstract states that the paper introduces an \\u201calgorithm for the creation of nonlinear, interpretable, machine learning models.\\u201d This could be understood as if LCEN can transform existing methods into interpretable, non-linear algorithms. Rather, LCEN is a specific algorithm involving an elastic net-regularized linear model with a tuned deterministic set of transformed features determined in the first step.\\n9. (S) The experiment investigating the effect of multicollinearity on LCEN would benefit from including a direct comparison of the behaviors of plain lasso, to show the stated behavior of selecting only one of the correlated features and how this effect differs from LCEN under different noise levels.\\n10. (S) The term \\u201cy features\\u201d appears first without explanation in the main text. It would help highlighting that this is only relevant for the time-series application and may be disregarded otherwise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I really appreciate the authors' effort in revising the paragraphs and addressing my concerns. The readability of the paper has improved noticeably, which is commendable. Nevertheless, I find the response to concerns regarding the novelty of the work to be insufficient. This issue has also been shared by other reviewers as well. Therefore, I am maintaining my original score.\"}" ] }
EhSUM1FcJw
ConceptFlow: Unified Framework for Personalized Image Generation
[ "Trong-Vu Hoang", "Quang-Binh Nguyen", "Thanh-Toan Do", "Tam V. Nguyen", "Minh-Triet Tran", "Trung-Nghia Le" ]
Personalized image generation is an appealing area of research within controllable image generation due to its diverse potential applications. Despite notable advancements, generating images based on single or multiple concepts remains challenging. For single-concept generation, it is difficult to strike a balance between identity preservation and prompt alignment, especially in complex prompts. When it comes to multiple concepts, creating images from a single prompt without extra conditions, such as layout boxes or semantic masks, is problematic due to significant identity loss and concept omission. In this paper, we introduce ConceptFlow, a comprehensive framework designed to tackle these challenges. Specifically, we propose ConceptFlow-S and ConceptFlow-M for single-concept generation and multiple-concept generation, respectively. ConceptFlow-S introduces a KronA-WED adapter, which integrates a Kronecker adapter with weight and embedding decomposition, and employs a disentangled learning approach with a novel attention regularization objective to enhance single-concept generation. On the other hand, ConceptFlow-M leverages models learned from ConceptFlow-S to directly generate multi-concept images without the need for additional conditions, proposing a Subject-Adaptive Matching Attention (SAMA) module and a layout consistency guidance strategy. Our extensive experiments and user study show that ConceptFlow effectively addresses the aforementioned issues, enabling its application in various real-world scenarios such as advertising and garment try-on.
[ "Personalized Image Generation", "Visual Guidance" ]
Reject
https://openreview.net/pdf?id=EhSUM1FcJw
https://openreview.net/forum?id=EhSUM1FcJw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ySApfX0uMv", "vvfvmlxYtj", "vIyMlLeZ4Y", "vDaqrxM5iy", "tbqeh0gyci", "t7pdcKz36Y", "rX93YoTSKD", "n9K5oOsr09", "lncmwumnrO", "lQ98SpCTKV", "lPKemI6kMo", "dGmvvJyaT9", "aDdeZ0GxuC", "X8qriM0Res", "OH1TQcKYBN", "KHMc6m4RqE", "JwoRxPVcT0", "JQ78myGbYp", "Es8PKarngN", "ENEvHNzEXn", "DDJHqQuu1t", "AMFexO24eE", "9L4jlEv7A0", "4bDbovIHMv", "4Tmpnru9H0", "3mctu9RmVd" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732294989814, 1732295039954, 1732295057886, 1730650109691, 1732504001775, 1734614061952, 1732503971197, 1732765418379, 1732521847915, 1730654009697, 1732504023902, 1737523649026, 1732689287007, 1732728639696, 1732765186767, 1730716085108, 1729760848101, 1732765333101, 1732778558895, 1732586892077, 1732504043612, 1732294955375, 1732294682901, 1732294811827, 1732294861091, 1732294648869 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_wm2m" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Area_Chair_uCnh" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_wm2m" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_jVBs" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission4581/Reviewer_jVBs" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_2SBt" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_TYid" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_wm2m" ], [ "ICLR.cc/2025/Conference/Submission4581/Reviewer_TYid" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ], [ "ICLR.cc/2025/Conference/Submission4581/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (2/2)\", \"comment\": \"### Q2. Report both CLIP-I and DINO for identity preservation evaluation\\n\\nWe thank the reviewer for the suggestion. We showcase the results of CLIP-I and DINO for identity preservation evaluation in the tables below, where ConceptFlow-S and ConceptFlow-M both show **competitive results** compared to other methods:\\n\\n*Single concepts generation:*\\n\\n| Method | CLIP-I | DINO |\\n|------------------|--------|-------|\\n| DreamBooth | *0.815* | **0.684** |\\n| Custom Diffusion | 0.701 | 0.503 |\\n| DisenBooth | 0.767 | 0.616 |\\n| ED-LoRA | 0.81 | 0.667 |\\n| LoKr | **0.826** | 0.679 |\\n| **ConceptFlow-S** | 0.812 | *0.682* |\\n\\n\\n*Multiple concepts generation:*\\n\\n| Method | CLIP-I | DINO |\\n|------------------|--------|-------|\\n| Mix-Of-Show | 0.63 | 0.436 |\\n| Custom Diffusion | 0.59 | 0.369 |\\n| FreeCustom | 0.574 | 0.36 |\\n| OMG | 0.593 | 0.357 |\\n| **ConceptFlow-M** | **0.637** | **0.454** |\\n\\nHowever, as mentioned by Ruiz et al. [1], the CLIP-I metric is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g.
two different yellow clocks). Therefore, **it might not accurately reflect the identity preservation performance of the model**.\\n\\n[1] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023\\n\\n### Q3. About the extra conditions like boxes and masks\\n\\nWe agree with the reviewer that additional conditions can enhance control over the generation process. However, in this work, we focus on the challenge of **condition-free generation** to provide greater flexibility and ease of use. While incorporating extra conditions into ConceptFlow is feasible, we plan to explore this in future research.\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"We thank the reviewer for the constructive comments and suggestions. We address your concerns point by point as follows.\\n\\n### W1. The innovation of ConceptFlow\\n\\nFor this point, please kindly see our global response above.\\n\\n### W2. The identity dissimilarity or unsatisfactory aesthetics on generated outputs\\n\\nWe agree with the reviewer that the identity dissimilarity or aesthetic quality of the generated outputs can vary, and one factor influencing this is the **generator seed**. In image generation, different seeds can result in distinct variations of the same prompt, and multiple attempts may yield better or more aesthetically pleasing results. \\n\\n### W3, Q2. Expanding the Dataset and Subject Variety\\n\\nWe thank the reviewer for the suggestion.
Our dataset is mainly collected from the datasets of previous works on multi-concept generation [1,2,3] for human concepts, where they mainly use **a small set of 10-20 concepts**, and from DreamBench [4] for objects and animals, where we choose the concepts with intricate details for experiments.\\n\\n[1] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[2] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\\n\\n[3] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024\\n\\n[4] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023\\n\\n### Q1. ArcFace metric for identity preservation evaluation on human concepts\\n\\nWe thank the reviewer for the suggestion. We provide the ArcFace score for identity preservation evaluation on human concepts for both single and multiple concepts generation in the tables below:\\n\\n*Single concept generation:*\\n\\n| Methods | ArcFace |\\n|------------------|---------|\\n| DreamBooth | 0.305 |\\n| Custom Diffusion | 0.172 |\\n| DisenBooth | 0.257 |\\n| ED-LoRA | 0.37 |\\n| LoKr | 0.375 |\\n| **ConceptFlow-S** | **0.397** |\\n\\n*Multiple concepts generation:*\\n\\n| Method | ArcFace |\\n|------------------|---------|\\n| Mix-Of-Show | 0.223 |\\n| Custom Diffusion | 0.169 |\\n| FreeCustom | 0.142 |\\n| OMG | 0.234 |\\n| **ConceptFlow-M** | **0.306** |\\n\\nAccording to the results, ConceptFlow-S and ConceptFlow-M achieve a **higher ArcFace score** than the baseline methods, indicating that ConceptFlow can **better preserve the identity of human concepts**. This finding further demonstrates the effectiveness of our proposed **attention regularization (AR)** in ConceptFlow-S and **SAMA module** in ConceptFlow-M.
Please kindly see **Section 3** and **Section 4** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411) for more analysis.\\n\\n### Q3. Comparisons between SAMA and DreamMatcher's AMA in ConceptFlow-M pipeline\\n\\nWe thank the reviewer for the suggestion. We showcase the qualitative results of this comparison in **Section 4** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411), and the quantitative results are as follows:\\n\\n| Method | DINO | CLIP-T | ArcFace |\\n|----------------|------|--------|---------|\\n| ConceptFlow-M with AMA | 0.442 | 0.728 | 0.236 |\\n| **ConceptFlow-M with SAMA** | **0.454** | **0.784** | **0.306** |\\n\\nThe results show that **SAMA in ConceptFlow-M significantly outperforms AMA in DreamMatcher** in terms of both DINO and ArcFace scores, indicating that SAMA can **better preserve the identity of each concept**, especially for human concepts, in multi-concept images.\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"### Q4. Time efficiency and computational overhead\", \"we_provide_the_time_efficiency_and_computational_overhead_of_conceptflow_in_the_tables_below\": \"*Single concept learning:*\\n\\nWe adopt an image adapter from DisenBooth and we store the attention maps of concept tokens for attention regularization (AR).
The training time and the training memory of ConceptFlow-S are reported in the table below.\\n\\n| Method | Training Time | Training Memory | Note |\\n|------------------|---------------|-----------------|------|\\n| DreamBooth | 10 min | 21 GB | Fine-tune the whole SD U-Net |\\n| Custom Diffusion | 6 min | 16.5 GB | Fine-tune K-V in cross-attn layers of U-Net |\\n| DisenBooth | 15 min | 8.8 GB | Fine-tune U-Net with LoRA ($r=4$), image adapter |\\n| ED-LoRA | 8 min | 8.5 GB | Fine-tune U-Net with LoRA ($r=4$), embedding decomposition (ED) |\\n| LoKr | 6 min | 6 GB | Fine-tune U-Net with KronA ($a_1=a_2=16$) |\\n| **ConceptFlow-S** | 15 min | 10 GB | Fine-tune U-Net with KronA ($a_1=a_2=16$), ED, image adapter, AR |\\n\\n*Multiple concepts generation:*\\n\\nConceptFlow-M actually has to generate $(n + 1)$ images ($n$ reference images and 1 target image) at the same time for a prompt that involves $n$ concepts. We use a loop for reference image generation to reduce memory usage, but it may increase the sampling time.\\n\\n| Method | Fusion Time | Fusion Memory | Sampling Time | Sampling Memory |\\n| ----------------- | ----------- | -------------|---------------|-----------------|\\n| Mix-Of-Show | 5 min | 16.4 GB | 2 s | 6.4 GB |\\n| CustomDiffusion | 2 min | 0 | 1.75 s | 6.2 GB |\\n| FreeCustom | 0 | 0 | 20 s | 19 GB |\\n| OMG | 0 | 0 | 15 s | 13.5 GB |\\n| **ConceptFlow-M** | 5 min | 16.4 GB | 9.3 s | 25.7 GB |\"}
Experimental results indicate that the proposed method has a positive impact on these issues to some extent.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper collects a small dataset.\\n2. The proposed method achieves good identity preservation.\", \"weaknesses\": \"1. Comparing against only a limited number of methods, especially in multi-concept generation, does not adequately support the authors' claims. Why does this paper compare against so few published works? Leaving aside arXiv papers, there are many accepted, peer-reviewed papers that should be considered.\\n\\n[1] FastComposer: Tuning-Free Multi-subject Image Generation with Localized Attention, IJCV \\n\\n[2] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024 \\n\\n[3] Key-Locked Rank One Editing for Text-to-Image Personalization, SIGGRAPH 2023\\n\\n[4] Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models, CVPR2024\\n\\n[5] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV2024\\n\\n[6] KOSMOS-G: Generating Images in Context with Multimodal Large Language Models, ICLR2024\\n\\n2. Even in comparison with the methods listed in the table, the proposed trivial method does not show significant improvement. This is particularly evident in CLIP-T, which evaluates editability and does not support the authors' claims of balancing the trade-off between reconstruction and editability. Can this paper provide metrics that evaluate this trade-off directly, rather than reporting separately when one aspect is better and the other is worse?\", \"questions\": \"1. It is good to see the newly collected dataset. But for a comprehensive comparison, can the authors provide multi-concept generation results using CustomConcept101?\\n2. Most papers report both CLIP-I and DINO. I believe using both metrics can better evaluate the identity preservation in different aspects.\\n3.
I believe that extra conditions like boxes and masks can make generation more controllable. Of course, there are some situations where condition-free generation is better. It would be appreciated to see the proposed method achieve better performance in both settings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to Reviewer's reply\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback during the review process. Our rebuttal carefully addressed your concerns raised in the initial review. As the deadline for the discussion phase approaches, we kindly remind you to read our rebuttal. If you have any questions, suggestions, or further clarification on any points, please feel free to reach out.\\n\\nWe look forward to your feedback and hope for a positive outcome.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"metareview\": [\"**Strengths**:\", \"Comprehensive framework covering both single and multi-concept personalization.\", \"Demonstrated improvements in identity preservation and editability with robust quantitative metrics.\", \"**Weaknesses**:\", \"Limited novelty in methodology.\", \"Scalability concerns and dataset constraints impact generalizability.\", \"Incremental improvements over prior art, with some claims inadequately substantiated.\", \"Despite addressing most issues during the rebuttal, the concerns about scalability and generalizability weighed heavily in the final decision. The work was evaluated as promising but needing further development for inclusion in a top-tier venue.\"], \"additional_comments_on_reviewer_discussion\": [\"## Reviewer Feedback and Points Raised\", \"1.
**Weak Comparisons** (Reviewer 2SBt, wm2m):\", \"Insufficient baseline comparisons, especially for multi-concept generation.\", \"Limited methods evaluated (only a few recent works) and lacking certain state-of-the-art methods (e.g., KOSMOS-G).\", \"2. **Metrics and Trade-off Evaluation** (Reviewer wm2m, TYid):\", \"The paper claims to balance identity preservation and prompt alignment but fails to demonstrate this with a unified trade-off metric.\", \"3. **Over-complex Framework with Limited Novelty** (Reviewer TYid):\", \"Techniques like KronA-WED, SAMA, and Layout Consistency were deemed incremental and heavily inspired by prior works (e.g., DreamMatcher, Break-A-Scene).\", \"4. **Scalability Issues** (Reviewer jVBs):\", \"Concerns about the method's scalability beyond two concepts.\", \"Limited qualitative results for more complex scenarios involving multiple concepts.\", \"5. **Dataset Limitations** (Reviewer TYid):\", \"Narrow subject variety potentially led to cherry-picking. A more diverse dataset was requested.\", \"6. **Identity Preservation Issues** (Reviewer TYid):\", \"Identity dissimilarity in certain outputs (e.g., human faces) raised concerns about robustness.\", \"7. **Computational Overhead** (Reviewer TYid):\", \"Questions about the time and memory efficiency of the proposed framework compared to others.\", \"## Author Responses and Revisions\", \"1. **Additional Experiments and Baselines**:\", \"Added comparisons with methods like FreeCustom and OMG, with improved results showcased using metrics like ArcFace.\", \"Clarified reasons for excluding some methods (e.g., KOSMOS-G due to lack of reproducibility or computational constraints).\", \"2. **Trade-off Demonstrations**:\", \"Introduced an F1 score to balance identity preservation (DINO) and editability (CLIP-T), demonstrating improvements over other methods.\", \"3. 
**Technical Clarifications**:\", \"Highlighted how SAMA differs from DreamMatcher\u2019s AMA, focusing on the inclusion of a concept foreground mask, which showed improved performance.\", \"4. **Scalability and Limitations**:\", \"Acknowledged limitations in handling more than two concepts and proposed future work to address scalability challenges.\", \"5. **Dataset and Subject Variety**:\", \"Acknowledged limitations in dataset diversity and committed to broader evaluations in future work.\", \"6. **Efficiency Analysis**:\", \"Provided computational benchmarks, demonstrating competitive efficiency compared to baseline methods like DisenBooth and DreamBooth.\"]}", "{\"title\": \"Looking forward to Reviewer's reply\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback during the review process. Our rebuttal carefully addressed your concerns raised in the initial review. As the deadline for the discussion phase approaches, we kindly remind you to read our rebuttal. If you have any questions, suggestions, or further clarification on any points, please feel free to reach out.\\n\\nWe look forward to your feedback and hope for a positive outcome.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe are glad that your concerns were addressed.\\n\\nRegarding generating 3 or 4 concepts, our work mainly tackles generating two concepts, which is the norm in multiple concept generation [1][2]. When generating more than two concepts, the results of our framework tend to be unsatisfactory. We found that this is due to the limitations of Stable Diffusion 1.5, which our framework is based on, in handling complex prompts. Currently, this limitation restricts our framework\u2019s ability to handle prompts with too many concepts.
\\n\\nPlease kindly see **Section 5.2** in our [**rebuttal PDF file**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411) for more analysis and illustration figures.\\n\\n[1] Multi-Concept Customization of Text-to-Image Diffusion, CVPR 2023\\n\\n[2] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"title\": \"My concerns\", \"comment\": \"Most of the concerns I raised remain unaddressed.\\n\\nW1. The comparison is still weak. KOSMOS-G, FastComposer, Key-Locked Rank One Editing for Text-to-Image Personalization are publicly available and have training scripts. I can find their github repository.\\n\\nW2. \\\"There is currently no work that directly addresses the metric evaluating the trade-off between reconstruction and editability.\\\" As mentioned in the paper, the proposed method takes a good balance. However, I struggle to understand what this balance is (While you achieve a higher DINO score, the performance on CLIP-T is lower). It seems you do not outperform on both metrics simultaneously, and I don\\u2019t think a simple average score can adequately represent overall performance, as the metrics are fundamentally different in their measurements. Could you clarify this further? I believe introducing a new metric to evaluate the trade-off would be a significant contribution to this paper. A paper that can be accepted by a top-tier conference (i.e., ICLR) is not to say no existing works do something, but rather by providing convincing evidence for your claims (a good trade-off) with a higher DINO score and a lower CLIP-T score. By mentioning this, I don't mean you cannot use traditional metrics. 
However, you need to ensure the paper convincingly demonstrates how a higher DINO score and a lower CLIP-T score align with the claims made about achieving a balanced trade-off.\\n\\nQ1 and Q3 are left as future work, but I believe addressing them would enhance this paper significantly rather than deferring them to future work.\"}", "{\"summary\": \"The manuscript proposes two strategies, \\\"ConceptFlow-S\\\" and \\\"ConceptFlow-M\\\", for single-concept generation and multiple-concept generation, to resolve the limitations of previous methods in identity preservation, prompt alignment, and concept omission. For single concepts, the authors introduce the KronA-WED adapter with the attention regularization objective. For multiple concepts, the authors propose the SAMA module and the layout consistency guidance. This method facilitates rapid and efficient single and multi-subject customization. The results demonstrate that the method is competitive with leading frameworks in various image generation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The work identifies the issues in both single and multiple concept personalization, such as the trade-off between reconstruction and editability, as well as identity loss and concept omission.\\n\\n2. The authors have developed a framework for generating personalized images that effectively integrates strategies to solve both single (ConceptFlow-S) and multiple concept (ConceptFlow-M). \\n\\n3. The paper provides experimental results, including both quantitative and qualitative assessments, showcasing the superior performance of the framework. The results clearly highlight the effectiveness of the proposed method in facilitating personalized image generation.\", \"weaknesses\": \"1. In Figure 5, the method utilizes the SAMA module to enhance the identity details. However, it is unclear how it would perform with fine-grained subjects (two dogs or two cats of different breeds).
Also, I wonder how this module would work when the concept size increases. Clarification is needed on whether the module can effectively manage such fine distinctions and multiple diverse subjects.\\n\\n2. Recent methodologies [1, 2] have demonstrated the capability to learn multi-concept personalization, it remains uncertain if the proposed work can handle multiple personalized instances (> 2), particularly for contexts involving up to five subjects. Although quantitative results for two subjects are provided in both paper and appendix, the absence of qualitative results for three or more subjects in both the main text and appendix might be a notable omission. Including these results would substantiate the method's capability in more complex scenarios.\\n\\n\\n[1] Liu, Zhiheng, et al. \\\"Cones 2: Customizable image synthesis with multiple subjects.\\\" arXiv preprint arXiv:2305.19327 (2023).\\n\\n[2] Yeh, Chun-Hsiao, et al. \\\"Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition.\\\" arXiv preprint arXiv:2402.15504 (2024).\", \"questions\": \"Given the concerns mentioned, particularly around the method's scalability to more complex multi-subject personalizations and the clarification behind module choices, I recommend a \\\"marginally below the acceptance threshold\\\" for this paper. Enhancements in demonstrating multi-subject capabilities, clarity in embedding visualization, and justification for the choice of technology could potentially elevate the manuscript to meet publication standards.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to Reviewer's reply\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback during the review process. Our rebuttal carefully addressed your concerns raised in the initial review. 
As the deadline for the discussion phase approaches, we kindly remind you to read our rebuttal. If you have any questions, suggestions, or further clarification on any points, please feel free to reach out.\\n\\nWe look forward to your feedback and hope for a positive outcome.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Author's Rebuttal\", \"comment\": \"Thanks for the authors' response. I have no further comments regarding the SAMA-related questions. However, I still have concerns about the multiple concept generation aspect. While the authors mention that ConceptFlow focuses on advancing condition-free generation and that addressing this limitation is a potential direction for future work, I believe this issue is closely related to the current work and should not be deferred entirely to future exploration.\\n\\nHave the authors tried to generate 3 or 4 different concepts (condition-free)? I am still curious whether the method has the controllability to handle generation beyond 2 concepts (mostly shown in the paper).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe're pleased to have addressed your concerns. Could you please consider updating your rating?\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe appreciate your fast and thoughtful reply. We would like to clarify some details as follows:\\n\\n### W1. Comparisons with other methods for multi-concept generation\", \"for_the_methods_you_mentioned\": \"- KOSMOS-G: The URLs provided for their trained weights are incorrect. Retraining their model would require significant computational resources (they used OpenImages V7 and InstructPix2Pix datasets).\\n- FastComposer: The authors trained their model specifically for human concepts on the FFHQ dataset.
Since we evaluate ConceptFlow on a broader range of concepts, including animals, objects, and humans, using their trained weights is not suitable. Additionally, training their model on a different dataset raises concerns about fairness.\\n- Key-Locked Rank One Editing for Text-to-Image Personalization: We were only able to find an unofficial implementation of this method at [this repository](https://github.com/ChenDarYen/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization). However, the code there cannot reproduce the multi-concept generation results presented in their paper.\\nThere are also several issues mentioned in that repository regarding this problem, such as [this issue](https://github.com/ChenDarYen/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/issues/9).\\nAs a result, using this implementation would not be a fair comparison.\\n\\nWe also tried our best to compare our method with existing methods in multi-concept generation, including Custom Diffusion [1], FreeCustom [2], Mix-Of-Show [3], and OMG [4]. Please see the results in Table 1 in our \\n [**rebuttal PDF file**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411). Our method achieved better performance than these state-of-the-art methods.\\n\\n[1] Multi-Concept Customization of Text-to-Image Diffusion, CVPR 2023\\n\\n[2] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024\\n\\n[3] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[4] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\\n\\n\\n### W2. What \\\"balance trade-off\\\" means\\n\\nFine-tuning-based methods often face challenges in balancing the **trade-off** between reconstruction and editability, i.e. **achieving a higher DINO score can result in a decrease in the CLIP-T score, and vice versa**.
Additionally, qualitative results (Figure 1 in our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411/ConceptFlow_Rebuttal_ICLR25.pdf)) underscore this trade-off: DreamBooth achieves strong reconstruction fidelity but poorly aligns with prompts, while Custom Diffusion aligns well with prompts but exhibits weak reconstruction quality.\\n\\nQuantitatively, Table 1 of our rebuttal manuscript provides further evidence. Compared with ED-LoRA [1], DreamBooth [2] achieves a higher DINO score but suffers from a lower CLIP-T score. A similar trend is observed across other methods in this table.\\n\\nIn contrast, ConceptFlow-S achieves both higher DINO and CLIP-T scores compared to parameter-efficient fine-tuning methods (ED-LoRA [1] and LoKR [3]), demonstrating that **ConceptFlow-S can simultaneously enhance these two capabilities**.\\n\\nWhile ConceptFlow-S has a lower CLIP-T score compared to Custom Diffusion [4] and DisenBooth [5], **it significantly outperforms these methods in DINO score**. These methods overlooked the importance of reconstruction capability. Additionally, when compared to DreamBooth [2], ConceptFlow-S showcases a **competitive DINO score along with a better CLIP-T score**. DreamBooth [2] is often affected by overfitting, which limits its ability to effectively balance the trade-off.\\n\\nFurthermore, as suggested by Reviewer wm2m, we compute **the F1 score between DINO and CLIP-T as a provisional metric directly evaluating the trade-off** between reconstruction and editability. Our method outperforms existing methods in terms of F1 score, highlighting the superior balance of our method.
Please see the results in Table 1 in our \\n [**rebuttal PDF file**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411).\\n\\n\\n[1] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[2] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023\\n\\n[3] Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation, ICLR 2024\\n\\n[4] Multi-Concept Customization of Text-to-Image Diffusion, CVPR 2023\\n\\n[5] DisenBooth: Identity-Preserving Disentangled Tuning for Subject-Driven Text-to-Image Generation, ICLR 2024\", \"title\": \"Response (1/2)\"}", "{\"summary\": \"This paper introduces the ConceptFlow framework, including the ConceptFlow-S component for robust single-concept learning and generation, and the ConceptFlow-M component to create images of multiple concepts without the need for spatial guidance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"ConceptFlow-M needs no spatial guidance and can somehow handle the missing concept issue, which is important in multi-concept personalized image generation.\", \"weaknesses\": \"1. This work claims to be a unified framework of personalized image generation in the title, but there is no presentation about how it **unifies** personalized image generation.\\n2. The contribution of the proposed method is weak. There are too many modules that already exist with no relation to each other.\\n3. The comparison experiments are not rational. Since the proposed model introduces an image adapter, it should compare with the methods using the image adapter. In fact, as a fine-tune-based method, there is no need to use an image adapter since it needs an image input during the inference. 
I suggest the authors compare the proposed method with some zero-shot methods like IP-Adapter [1], SSR-Encoder [2], and MS-Diffusion [3], which can also solve the same issues.\\n4. The experiments are completely not enough with few baselines and qualitative examples.\\n\\n[1] Ye, Hu, et al. \\\"Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.\\\" arXiv preprint arXiv:2308.06721 (2023). \\n\\n[2] Zhang, Yuxuan, et al. \\\"Ssr-encoder: Encoding selective subject representation for subject-driven generation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. \\n\\n[3] Wang, X., et al. \\\"MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance.\\\" arXiv preprint arXiv:2406.07209 (2024).\", \"questions\": \"Please see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes **ConceptFlow**, which includes two components: **ConceptFlow-S** for single-concept generation and **ConceptFlow-M** for multi-concept generation. ConceptFlow-S employs a **KronA-WED adapter** that integrates a Kronecker adapter with weight and embedding decomposition. It also introduces **attention regularization** to improve single-concept generation. ConceptFlow-M extends the models trained by ConceptFlow-S, using **SAMA** (Subject-Adaptive Matching Attention) and **layout consistency guidance** to handle multi-concept generation without requiring additional spatial conditions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Comprehensive personalization framework:**\\n The paper covers both single and multi-concept personalization, supported by comparisons with recent works.\\n\\n2. 
**Application insights and metrics:**\\n The discussion includes practical applications, qualitative examples, and appropriate metrics for performance analysis.\", \"weaknesses\": \"1. **Over-complex Framework without Substantial Innovations:**\\n - The **KronA-WED adapter** is like a simple stack of previous strategies, **Mix-of-Show\\u2019s *ED-LoRA*** [1] $+$ **Kronecker Adapter *(KronA)*** [6] $\\\\rightarrow$ **KronA-WED adapter**.\\n - The proposed framework looks highly similar to a blend of **Mix-of-Show\\u2019s *gradient fusion*** [1] and **DisenBooth\\u2019s disentangled learning** [2], which dilutes its novelty. \\n - Furthermore, the **attention regularization (AR)** employed seems highly similar to **Break-A-Scene's attention-based loss** [3], which is also designed to encourage each handle to attend only to the image region occupied by the corresponding concept. \\n - The **SAMA module** for multi-concept generation largely reuses existing strategies, closely mirroring **DreamMatcher\\u2019s AMA** [4] (with **Eq. 6-9** resembling **Eq. 3, 4, 5, and 7** from DreamMatcher) but with minor modifications, such as adding a **foreground mask**. \\n - The design of **Layout Guidance** follows the core idea of **A-Star's Attention Segregation Loss** [5], also not a novel technique.\\n\\n This accumulation of existing techniques makes the work feel like an industrial assembly, which reduces the perceived novelty of the contribution.\\n\\n2. **Identity Preservation Issues and Low-Level Aesthetics:** \\n - Several generated outputs display **identity dissimilarity** or **unsatisfactory aesthetics**, such as Justin Bieber, Emma Stone, and the cat in **Fig. 1**, which raise my concerns about the soundness of the proposed method and the robustness of the framework when dealing with **diverse or intricate concepts**.\\n\\n3. **Dataset Limitations and Generalizability:** \\n - The selection of subjects in the manuscript is limited (**24 subjects (Fig. 12)**) and potentially biased toward **cherry-picked cases**. I believe these limited subjects cannot represent the full spectrum of real-world complexities, and a broader test set covering **more diverse concepts and edge cases** would provide stronger evidence of the framework's generalizability.\\n\\n**Reference**\\n\\n[1] *Gu, et al. \\\"Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models.\\\" NeurIPS (2024).*\\n\\n[2] *Chen, et al. \\\"Disenbooth: Identity-preserving disentangled tuning for subject-driven text-to-image generation.\\\" arXiv (2023).*\\n\\n[3] *Avrahami, et al. \\\"Break-a-scene: Extracting multiple concepts from a single image.\\\" SIGGRAPH 2023.*\\n\\n[4] *Nam, et al. \\\"Dreammatcher: Appearance matching self-attention for semantically-consistent text-to-image personalization.\\\" CVPR (2024).*\\n\\n[5] *Agarwal, et al. \\\"A-star: Test-time attention segregation and retention for text-to-image synthesis.\\\" ICCV (2023).*\\n\\n[6] *Edalati, Ali, et al. \\\"Krona: Parameter efficient tuning with kronecker adapter.\\\" arXiv (2022).*\", \"questions\": \"1. **Enhancing Identity Matching in Evaluation:**\\n - The current evaluation focuses on **DINO scores** and **CLIP-T metrics**, but these might not capture **fine-grained identity preservation**, especially for faces. Have you considered incorporating face recognition tools like **FaceNet** or **ArcFace** [7]? These tools could offer bounding-box-level face similarity evaluation, providing a more precise measurement of face identity consistency.\\n\\n\\n2. **Expanding the Dataset and Subject Variety:** \\n - As stated in **Weakness 3**, the number of subjects used for both quantitative and qualitative evaluations appears limited (**Fig. 12**). This narrow selection may affect the statistical reliability of the results. 
Could you **experiment on more subjects or scenarios** to validate the performance of ConceptFlow across a broader range of real-world cases?\\n\\n3. **Innovative Contributions of SAMA Compared to DreamMatcher:** \\n - The *SAMA module* seems closely related to DreamMatcher\\u2019s *Appearance Matching Self-Attention (AMA)*, with the primary distinction being the use of a *concept foreground mask ($\\\\mathbf{M}_k$)*. Could you further conduct **ablation studies on AMA vs. SAMA** to show how it extends beyond DreamMatcher's approach?\\n\\n\\n4. **Time Efficiency and Computational Overhead:** \\n - ConceptFlow involves multiple steps, such as *gradient fusion, SAMA matching, and layout consistency guidance*, which could introduce computational overhead. Could you provide a **memory / time efficiency comparison** between ConceptFlow and other state-of-the-art methods, such as **LoKr [8], DisenBooth [2], and Mix-of-Show [1]**? This would clarify the practical trade-offs between performance and efficiency for both single and multi-concept generation.\\n\\n\\n**Reference**\\n\\n[7] *https://github.com/deepinsight/insightface*.\\n\\n[8] *Yeh, et al. \\\"Navigating text-to-image customization: From lycoris fine-tuning to model evaluation.\\\" ICLR (2024)*.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"### Q1 and Q3:\\n\\nRegarding **conducting experiments on the CustomConcept101 dataset**, this is a major request. ***This dataset is too large to conduct experiments within this limited timeframe of the rebuttal process***. In addition, our work mainly handles the interaction between humans and objects, while this dataset contains few human concepts. However, we can add these experiments in the camera-ready version.\\n\\nRegarding **adding extra conditions**, this is a major request. 
Our work tackles the issues of ***condition-free generation*** to reduce the constraints on users. We aim to provide greater flexibility and ease of use than works needing conditions like boxes and masks. ***Integrating extra conditions is out-of-the-scope of this work***. Adding conditions is not simply a matter of plug-and-play modules. It requires modifying and optimizing parts of the framework to achieve good performance, which takes time and is not feasible within the limited timeframe of the rebuttal process. Therefore, we will leave this part for future work.\\n\\nWe hope these clarifications resolve your concerns, and we are happy to address further questions. \\n\\nBest regards, \\n\\nAuthors of Paper 4581\"}", "{\"comment\": \"Thank you for your rebuttal.\\n\\n\\\"Integrating extra conditions is out-of-the-scope of this work.\\\" This means it is difficult for your work to achieve more flexible image generation (one may want to control the region of generation sometimes, and not at other times). While many works enable both conditional and condition-free generation, this work is specifically limited to condition-free generation.\\n\\nIn the paper, you state \\\"When it comes to multiple concepts, creating images from a single prompt without extra conditions, such as layout boxes or semantic masks, is problematic due to significantly identity loss and concept omission.\\\" While this is an important challenge to address, it would be ideal to do so without sacrificing the condition-based generation ability. For a top-tier conference like ICLR, I would like to see submissions that can address the question the submission raised without compromising its existing strengths. 
It would be a stronger contribution than tackling one issue at the expense of another.\\n\\nBest,\\nReviewer wm2m\"}", "{\"title\": \"Response to Authors' Rebuttal: Maintaining Current Score\", \"comment\": \"The rebuttal addressed most of my questions and concerns, and I appreciate the clear explanation. I would like to maintain my current score.\"}", "{\"title\": \"Looking forward to Reviewer's reply\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback during the review process. Our rebuttal carefully addressed the concerns raised in your initial review. As the deadline for the discussion phase approaches, we kindly remind you to read our rebuttal. If you have any questions or suggestions, or need further clarification on any points, please feel free to reach out.\\n\\nWe look forward to your feedback and hope for a positive outcome.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of Paper 4581\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"We thank the reviewer for the constructive comments and suggestions. We address your concerns point by point as follows.\\n\\n### W1. Comparisons with other methods for multi-concept generation\\n\\nWe thank the reviewer for the suggestion. We have included comparisons between ConceptFlow-M, FreeCustom [1], and OMG [2] for multi-concept generation in **Section 1** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411), as other methods either do not release their code or trained weights. Additionally, we use ArcFace metrics to evaluate identity preservation for human concepts.\\n\\nIt is noteworthy that in a **condition-free setting**, ConceptFlow **outperforms** most other methods when generating multi-concept images.\\n- For FreeCustom [1], this method requires reference images for concepts in the correct positions where it expects them to appear in the output image. 
As a result, using images with the concept centered, like those from a personalization dataset, leads to significant artifacts.\\n- For OMG [2], a two-stage approach is used: first generating the layout and then integrating concept details into the layout. The main drawback of this approach occurs when the shape of the generated layout differs from the actual shape of the concept.\\n\\n[1] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024\\n\\n[2] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\\n\\n### W2. Metrics that evaluate the trade-off directly\\n\\nWe thank the reviewer for the suggestion.\\nSince there is currently no work that directly proposes a metric evaluating the trade-off between reconstruction and editability, we compute the **F1 score between DINO and CLIP-T** as a provisional metric in the table below. The results show that **both ConceptFlow-S and ConceptFlow-M strike a better balance** between reconstruction (identity preservation) and editability (prompt alignment) compared to other methods.\\n\\n*Single concept generation:*\\n\\n| Method | DINO | CLIP-T | **F1-Score** |\\n|------------------|-------|--------|----------|\\n| DreamBooth | **0.684** | 0.678 | 0.681 |\\n| Custom Diffusion | 0.503 | **0.784** | 0.613 |\\n| DisenBooth | 0.616 | *0.743* | 0.674 |\\n| ED-LoRA | 0.667 | 0.703 | 0.685 |\\n| LoKr | 0.679 | 0.688 | 0.683 |\\n| **ConceptFlow-S** | *0.682* | 0.706 | **0.694** |\\n\\n*Multiple concepts generation:*\\n\\n| Method | DINO | CLIP-T | **F1-Score** |\\n|------------------|-------|-------|-------|\\n| Mix-Of-Show | 0.436 | 0.779 | 0.559 |\\n| Custom Diffusion | 0.369 | **0.802** | 0.505 |\\n| FreeCustom | 0.369 | 0.722 | 0.480 |\\n| OMG | 0.357 | 0.732 | 0.480 |\\n| **ConceptFlow-M** | **0.454** | 0.784 | **0.575** |\\n\\nAdditionally, we incorporate the ArcFace score to assess identity preservation for human concepts in **Section 1** of our 
[**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411). This metric also demonstrates the effectiveness in preversing human facial details of ConceptFlow-S and ConceptFlow-M.\\n\\n### Q1. Multi-concept generation results on the CustomConcept101 dataset\\n\\nWe thank the reviewer of the suggestion. \\nOur dataset is heavily collected from the datasets of previous works on multi-concept generation [1,2,3] for human concepts, where they mainly use **a small dataset from 10-20 concepts**, and from DreamBench [4] for objects and animals, where we choose the concepts with intricate details for experiments.\\nTherefore, conducting experiments on the CustomConcept101 dataset for ConceptFlow and other baseline methods for comparison would require substantial effort within this limited timeframe. However, we will take this suggestion into account for future work.\\n\\n[1] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[2] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\\n\\n[3] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition, CVPR 2024\\n\\n[4] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023\"}", "{\"title\": \"Global Response (2/2)\", \"comment\": \"## Core contributions and novelty\\n\\n### 1. 
Develop a novel yet robust single-concept learning pipeline.\\n\\nWe introduce **KronA-WED** adapter, which leverages KronA [1], DORA [2] and embedding decomposition [3]:\\n- KronA [1] relaxes the low-rank assumption in LoRA [4], improving the learning capability of parameter-efficient fine-tuning (PEFT) process while maintaining a smaller model size than LoRA.\\n- DORA [2] further enhances the performance gains from PEFT methods via the weight decomposition technique.\\n- Embedding decomposition [3] has an important role for the success of Gradient Fusion [5], which we use to combine the individual weights learned by ConceptFlow-S. This merged weight is then utilized at the beginning of ConceptFlow-M.\\n\\nWe leverage Disentangled learning [6] to prevent the model from learning concept-irrelevant details, such as background scenes, thus enhancing editability. We employ an image adapter with three loss objectives **only during the finetuning process**, but not during inference like existing methods [7, 8, 9].\\n\\n[1] KronA: Parameter Efficient Tuning with Kronecker Adapter, NeurIPS 2023 Workshop\\n\\n[2] DoRA: Weight-Decomposed Low-Rank Adaptation, ICML 2024\\n\\n[3] P+: Extended Textual Conditioning in Text-to-Image Generation, arXiv 2023\\n\\n[4] LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022\\n\\n[5] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[6] DisenBooth: Identity-Preserving Disentangled Tuning for Subject-Driven Text-to-Image Generation, ICLR 2024\\n\\n[7] Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models, arXiv preprint arXiv:2308.06721 (2023).\\n\\n[8] Ssr-encoder: Encoding selective subject representation for subject-driven generation. CVPR 2024\\n\\n[9] MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance, arXiv preprint arXiv:2406.07209 (2024).\\n\\n### 2. 
Novel attention regularization (AR) in ConceptFlow-S\\n\\nAttention regularization during fine-tuning Stable Diffusion is a **widely used technique** for adjusting attention maps of specific tokens in various tasks, such as subject-driven personalization [1,2], relation inversion [3], and concept erasure [4].\\n\\nIn ConceptFlow-S, we propose a novel, part-of-speech-inspired method to handle the attention maps of the adjective token ($V_{rand}$) and the noun token ($V_{class}$) of a concept differently. Specifically:\\n- The noun token $V_{class}$ should align with the extracted masks from training input images\\n- The adjective token $V_{rand}$ can activate **some specific regions** within those masks.\\n\\nThis approach is **particularly effective for learning human concepts**, where the \\\"identity\\\" of the desired concept can vary in each training image (e.g., different clothing or hairstyles). The relaxed constraints on the adjective token allow the model to better capture the common features of the concept, which, in the case of human concepts, is primarily the face (please see **Section 3** in our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411) for experimental results).\\n\\n[1] Break-A-Scene: Extracting Multiple Concepts from a Single Image, SIGGRAPH Asia 2023\\n\\n[2] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention, IJCV 2024\\n\\n[3] Customizing Text-to-Image Generation with Inverted Interaction, ACM MM\\u201924\\n\\n[4] MACE: Mass Concept Erasure in Diffusion Models, CVPR 2024\\n\\n### 3. 
Subject Adaptive Matching Attention (SAMA) in ConceptFlow-M\\n\\nInspired by the AMA module in DreamMatcher [1], we extend this strategy to multi-concept generation in ConceptFlow-M to **address the identity loss problem** for concepts in multi-concept images.\\n\\nThe modifications from AMA involve using concept attention maps as the \\\"foreground mask\\\" to calculate the displacement fields in each warping operation, particularly in the **masking step during the calculation of matching cost volume $\\\\mathbf{C}_k$**. This enhances the accuracy of the feature similarity calculation between the target image and each reference image (please see **Section 4** in our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411) for experimental results).\\n\\n[1] DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization, CVPR 2024\\n\\n### 4. Layout consistency guidance in ConceptFlow-M\\n\\nFrom the investigation that the **concept missing issue** in personalized multi-concept generation is primarily due to the model's difficulty in retaining the layout from initial to final denoising steps, inspired by [1], we develop a simple yet effective **layout consistency guidance** mechanism in ConceptFlow-M.\\n\\n[1] A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis, ICCV 2023.\"}", "{\"title\": \"Response (1/1)\", \"comment\": \"We thank the reviewer for the constructive comments and suggestions. We address your concerns point by point as follows.\\n\\n### W1, W2. How ConceptFlow unifies personalized image generation. The contribution of the proposed method and the relation of modules.\\n\\nFor these two points, please kindly see our global response above.\\n\\n### W3. 
Compare with methods that also use image adapter\\n\\nFirstly, in ConceptFlow-S, we adopt the image adapter and related loss objectives from DisenBooth [1] **during fine-tuning process** for better disentangled learning of identity-relevant andi identity-irrelevant information. **We do not use the image adapter for inference** like other zero-shot methods [2,3,4]. \\n\\nWe present a qualitative comparison between ConceptFlow-S and zero-shot methods [2,3] in **Section 2** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411). Since MS-Diffusion [4] does not rely on Stable Diffusion 1.5, we do not include them in the comparison. The results show that **ConceptFlow-S generates images with superior identity preservation for concepts with intricate details**, which is a challenge commonly faced by zero-shot methods. We remark that Ip-adapter [3] and MS-Diffusion [4] have not been published in peer-review venues.\\n\\n[1] DisenBooth: Identity-Preserving Disentangled Tuning for Subject-Driven Text-to-Image Generation, ICLR 2024\\n\\n[2] Ssr-encoder: Encoding selective subject representation for subject-driven generation, CVPR 2024\\n\\n[3] Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models, arXiv preprint arXiv:2308.06721 (2023).\\n\\n[4] MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance, arXiv preprint arXiv:2406.07209 (2024).\\n\\n### W4. More experiments results\\n\\nWe thank the reviewer for the suggestion. Please see the **Section 1** in our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411), where we add additional experiments results, especially for ConceptFlow-M, to demonstrate the effectiveness of the ConceptFlow framework.\"}", "{\"title\": \"Response (1/1)\", \"comment\": \"We thank the reviewer for the constructive comments and suggestions. We address your concerns point by point as follows.\\n\\n### W1. 
How SAMA performs with fine-grained subjects (two dogs or two cats with different breeds)\\n\\nFirstly, the effectiveness of SAMA mainly relies on the accuracy of concept masks, i.e., **the cross-attention maps of their tokens**. We consider combinations such as two dogs, two cats, a dog and a cat, or two humans as **semantically similar concepts**. In these cases, Stable Diffusion 1.5 (SD 1.5) often suffers from the **attribute binding problem** [1], where the attention map of one token is incorrectly focused on the other token even at the beginning of the denoising process. Therefore, SAMA, and also ConceptFlow-M, tends to fail in these cases. We carefully mention this in **Section 5.1** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411).\\n\\nNonetheless, for the case of **concepts with intricate details**, SAMA demonstrates its capability of preserving complex patterns, as shown in **Section 3** of our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411).\\n\\n[1] Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis, ICLR 2023\\n\\n### W1. How SAMA works when the number of concepts increases\\n\\nIn theory, our proposed SAMA module **can work for a general case of $N$ concepts**. Specifically:\\n- For each concept $k$ ($1 \\\\leq k \\\\leq N$):\\n - We have a process to generate one image of it (which is called the *reference image*). 
The reference features corresponding to this concept at each timestep, $\\\\psi^{ref}_k$, are calculated from the intermediate features within the U-Net of this process.\\n - Its foreground mask $\\\\mathbf{M}_k$ is extracted from the cross-attention maps of its tokens in the target denoising process.\\n - The warping calculation to obtain $V_{k}^{ref \\\\to trg}$ follows equations (6) and (7) in our manuscript.\\n- From all warped values $V_{k}^{ref \\\\to trg} (k=1,2,..,N)$, we perform the blending operation using the concept masks $\\\\{\\\\mathbf{M}_k\\\\}_{k=1}^N$ to obtain the final value for the self-attention calculation in the target denoising process, following equation (8) in the manuscript.\\n\\n### W2. Results on more than two concepts\\n\\nIn scenarios involving more than two concepts, ConceptFlow-M produces both successful and failed outputs with roughly equal probability. It is noteworthy that **most condition-free multi-concept generation methods [1,2] on Stable Diffusion 1.5 (SD 1.5) exhibit the same issue**, and they only conduct experiments with two concepts.\\nWe are convinced that this limitation stems from SD 1.5's challenges in handling complex prompts (i.e., prompts involving more than two concepts). For visualization and further analysis, please refer to **Section 5.2** in our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411).\\n\\nTo address this problem, typical approaches involve incorporating conditions such as boxes and masks, and applying local prompting and denoising [3,4]. However, in this work, *our focus for ConceptFlow is on advancing **condition-free generation***. 
As a result, addressing this limitation will be a potential area for future work.\\n\\n[1] Multi-Concept Customization of Text-to-Image Diffusion, CVPR 2023\\n\\n[2] Key-Locked Rank One Editing for Text-to-Image Personalization, SIGGRAPH 2023\\n\\n[3] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models, NeurIPS 2024\\n\\n[4] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models, ECCV 2024\"}", "{\"title\": \"Global Response (1/2)\", \"comment\": \"We thank the reviewers for their constructive comments and suggestions. We are encouraged with positive feedback that ConceptFlow **effectively solves both single and multiple concept generation problems** (jVBs, 2SBt, TYid), has **good identity preservation capability** (wm2m), and has **potential applications** (TYid).\\n\\nIn our [**rebuttal manuscript**](https://anonymous.4open.science/r/ConceptFlow_ICLR25_Rebuttal-2411) (clickable), we have added additional experiments, ablation studies, analyses, and discussions based on the suggestions from all reviewers:\\n1. Additional experiments\\n2. ConceptFlow-S versus zero-shot methods in single concept generation\\n3. Attention Regularization (AR) in ConceptFlow-S versus Break-A-Scene's strategy\\n4. Subject Adaptive Matching Attention (SAMA) in ConceptFlow-M versus DreamMatcher's AMA\\n5. Limitation of ConceptFlow-M:\\n - Generations of multiple semantically similar concepts\\n - Generations of more than two concepts\\n\\nAlso, we would like to highlight the **contributions** of ConceptFlow, as well as how it **unifies personalized image generation**.\\n\\n## A comprehensive framework for personalized image generation \\n\\nFirstly, ConceptFlow-S finetunes Stable Diffusion for **robust single concept learning and generation**. 
We can then combine the individual weights learned by ConceptFlow-S, enabling the generation of multi-concept images through ConceptFlow-M **without requiring extra conditions** (e.g., boxes, masks).\\n\\nIt is noteworthy that **using weights learned by ConceptFlow-S** significantly enhances the performance of ConceptFlow-M in generating multiple concepts compared to other single-concept learning methods. Additionally, each concept only needs to be finetuned once, with the trained weights saved for future multi-concept generation. Therefore, **ConceptFlow unifies personalized image generation**, seamlessly evolving from single to multi-concept generation.\\n\\nWith superiority in generating interactions between **human and object**, as well as between **human and animal**, ConceptFlow showcases its **potential applications** in areas such as advertising, storytelling, and even garment synthesis.\"}" ] }
Eh1QM3OK51
PIN: Prolate Spheroidal Wave Function-based Implicit Neural Representations
[ "Dhananjaya Jayasundara", "Heng Zhao", "Demetrio Labate", "Vishal M. Patel" ]
Implicit Neural Representations (INRs) provide a continuous mapping between the coordinates of a signal and the corresponding values. As the performance of INRs heavily depends on the choice of nonlinear-activation functions, there has been a significant focus on encoding explicit signals within INRs using diverse activation functions. Despite recent advancements, existing INRs often encounter significant challenges, particularly at fine scales where they often introduce noise-like artifacts over smoother areas compromising the quality of the output. Moreover, they frequently struggle to generalize to unseen coordinates. These drawbacks highlight a critical area for further research and development to enhance the robustness and applicability of INRs across diverse scenarios. To address this challenge, we introduce the Prolate Spheroidal Wave Function-based Implicit Neural Representations (PIN), which exploits the optimal space-frequency domain concentration of Prolate Spheroidal Wave Functions (PSWFs) as the nonlinear mechanism in INRs. Our experimental results reveal that PIN excels not only in representing images and 3D shapes but also significantly outperforms existing methods in various vision tasks that require INR generalization, including image inpainting, novel view synthesis, edge detection, and image denoising.
[ "Prolate Spheroidal Wave Functions", "Implicit Neural Representations", "MLPs" ]
Accept (Poster)
https://openreview.net/pdf?id=Eh1QM3OK51
https://openreview.net/forum?id=Eh1QM3OK51
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wedCwiQQVN", "vshmugEnb9", "qlJMsDwyVw", "qjIjLBWIFj", "q0vCfo3RS0", "jVrE2odAXj", "iqXrssMmA3", "h10xVV1ZQg", "eztI4Rv3b4", "YidIZIU5RA", "XuGJichbLI", "XRsWULY9E9", "TvexFNB7MD", "RgsZIgHz3R", "RFesmAoRQw", "PQKlRsjwnN", "NbkQE0PV1t", "HsdmBpFAlb", "FdwE4pWtOt", "EJvdT9XJ3H", "71h1IBenrh", "5UyAh0RGlQ", "379EHg7LaM", "2zVA0XWVtC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732553310789, 1732223934746, 1732457659789, 1732224255678, 1732240342233, 1730604948310, 1729592764491, 1732781818550, 1732729240080, 1737523611775, 1730529519675, 1732281281610, 1732491057066, 1732291455929, 1734753701664, 1729129096658, 1732225426039, 1732730064974, 1732781762433, 1732781889776, 1732224788257, 1732429819845, 1732457472720, 1732781676906 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_Yext" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_8yGC" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_1n58" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Area_Chair_YjYe" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_Yext" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_1n58" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_BT7N" ], [ 
"ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Area_Chair_YjYe" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_BT7N" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_8yGC" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Reviewer_Yext" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ], [ "ICLR.cc/2025/Conference/Submission3985/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 8yGC,\\n\\nWe hope this message finds you well. As the discussion period is set to conclude tomorrow, and noting that the other reviewers have already responded, we wanted to kindly follow up to check if you have had the chance to review our response. If there are any additional questions or areas requiring clarification, please let us know. We would be happy to provide detailed answers to ensure all your concerns are fully addressed.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"First and foremost, the authors would like to thank the reviewer for their thoughtful and insightful questions, which have provided us with an excellent opportunity to clarify and strengthen our work.\", \"w1\": \"We sincerely thank the reviewer for emphasizing the importance of larger-scale evaluations and for recognizing our use of the Kodak Lossless True Color Image Dataset. Evaluating INRs necessitates retraining the model for each image, making large-scale assessments computationally demanding. While many major baselines in INR research evaluate their methods on only two or three examples, we are, to the best of our knowledge, the first to conduct a comprehensive evaluation across the entire Kodak dataset. 
In addition, we extended our analysis to the DIV2K dataset as suggested, albeit with some limitations. Due to time and resource constraints, we evaluated PIN and other state-of-the-art (SOTA) methods on a randomly selected subset of 30 images from DIV2K. The average PSNR values for this subset are presented in Table 1 below, and as shown, PIN outperforms other methods on this subset as well. Given that these images were selected randomly, we believe this performance pattern is representative of PIN\u2019s general superiority and would likely extend to the entire DIV2K dataset. Combining these results, we report the overall average PSNR metrics for 54 images (24 from Kodak and 30 from DIV2K) in Table 2. We deeply value the reviewer's suggestion regarding larger-scale evaluations and are actively considering this for future work. Specifically, leveraging meta-learning or other training efficiency mechanisms could make such evaluations more feasible. Thank you again for highlighting this aspect and for your constructive feedback. Further, we will incorporate these results into the revised manuscript.\n\n**Table 1: PSNR Variation across DIV2K dataset**\n\n| **Method** | **PSNR (dB)** |\n|---------------|---------------|\n| PIN | 41.46 |\n| WIRE | 30.12 |\n| SIREN | 38.77 |\n| GAUSS | 28.13 |\n| ReLU+PE | 26.74 |\n\n\n**Table 2: PSNR Variation Across DIV2K and Kodak Datasets**\n\n| **Method** | **PSNR (dB) Avg of (DIV2K + KODAK)** |\n|---------------|-------------------------------------|\n| PIN | 40.88 |\n| WIRE | 31.61 |\n| SIREN | 38.05 |\n| GAUSS | 27.19 |\n| ReLU+PE | 27.48 |\", \"q1\": \"We are thankful for the reviewer's question. We computed run-times for the convergence. The following table provides the run-times. 
As can be seen from the table, PIN's runtime is comparable with that of previous INRs.\n\n**Table: Convergence Time Across Methods**\n\n| **Method** | **Convergence Time (min)** |\n|---------------|-----------------------------|\n| PIN | 6.63 |\n| WIRE | 11.59 |\n| SIREN | 6.29 |\n| GAUSS | 7.47 |\n| ReLU+PE | 4.05 |\", \"q2\": \"We are thankful to the reviewer for the question. INRs operate on the coordinates of the signal provided to them. For training the inpainting task, the coordinates corresponding to the inpainted regions are masked out, and at test time the entire set of image coordinates is provided. This effectively assesses the method's generalization abilities for unseen coordinates.\"}", "{\"comment\": \"First and foremost, the authors sincerely thank the reviewer for their thoughtful and insightful questions. Furthermore, we greatly appreciate your previous comments, which have been instrumental in refining our work and providing us with an excellent opportunity to further clarify and strengthen it.\", \"w1\": \"The effective training times for the proposed method are as follows. As can be seen from the table, PIN's runtime is comparable with that of previous INRs. 
\\n\\n**Table: Convergence Time Across Methods**\\n\\n| **Method** | **Convergence Time (min)** |\\n|---------------|-----------------------------|\\n| PIN | 6.63 |\\n| WIRE | 11.59 |\\n| SIREN | 6.29 |\\n| GAUSS | 7.47 |\\n| ReLU+PE | 4.05 |\", \"w2\": \"We thank the reviewer for their thoughtful question and careful review. We would like to clarify that the improved results were not obtained by fine-tuning our proposed method on specific datasets or scenes. Unlike existing baselines, our method uses the same configuration across all experiments, regardless of the data modality.\\n\\nIn the previous submission, we reported results for a 3D dataset, but it did not significantly contrast our results with other INRs. During the rebuttal for the previous submission, we noted that PIN performs well on all 3D datasets except the one used in that submission. Therefore, for this submission, we replaced the previously reported results with those from different 3D datasets to better showcase the performance gap of our method.\", \"w3\": \"We are thankful for the reviewer for the question, and the suggestions. The authors acknowledge that we compared PIN with initial methods. However, to demonstrate the effectiveness of the proposed method, we compared the proposed approach with recently released FINER, INCODE, FR-INR on the entire Kodak image dataset. The following table summarizes the results. 
Further, the authors will make sure to cite the suggested and latest methods in the revised version of the manuscript.\\n\\n**Table: Comparison of Methods Based on PSNR (dB)**\\n\\n| **Method** | **PSNR (dB)** |\\n|---------------|---------------|\\n| PIN | 40.17 |\\n| INCODE | 34.07 |\\n| FR-INR | 37.91 |\\n| FINER | 35.87 |\"}", "{\"comment\": \"Could you please provide a comparison about the network size in the Tab.2 of your response?\"}", "{\"summary\": \"The paper proposes Prolate Spheroidal Wave Function-based Implicit Neural Representations (PIN), an effective representation inspired by Prolate Spheroidal Wave Functions (PSWFs). The proposed PIN outperforms other INR baselines in various reconstruction tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed PIN outperforms other INR baselines in various reconstruction tasks, including image reconstruction, image inpainting, and occupancy field and NeRF.\", \"Detailed ablation studies are conducted.\"], \"weaknesses\": \"- The metrics provided in the paper are only evaluated on a few individual examples. I appreciate the evaluation on the Kodak Lossless\\nTrue Color Image Dataset. However, it only contains 24 images. It is ideal to perform larger-scale evaluations, e.g. calculating the mean PSNR for a thousand images. For example, one may consider using the DIV2K dataset [1]. \\n\\n[1] Agustsson E, Timofte R. Ntire 2017 challenge on single image super-resolution: Dataset and study[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017: 126-135.\", \"questions\": [\"I am interested in the runtime of the proposed PIN. Will the new formulation hurt the speed of INRs? Is there a trade-off between quality and runtime?\", \"How is the experiment conducted for the Image Inpainting task? Are the missing pixels marked as black and fed into the INR? Or are those coordinates masked out and not used during training? 
Are the pixel masks provided to the INR?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the use of prolate spheroidal wave functions (PSWF) for implicit neural representation (INR). By employing PSWF as the activation function in INR, the method excels not only in representing images and 3D shapes but also significantly outperforms existing approaches in various vision tasks that rely on INR generalization, including image inpainting, novel view synthesis, edge detection, and image denoising.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Extensive experiments across various vision tasks demonstrate the effectiveness of PSWF.\\n\\n2. A comprehensive theoretical analysis highlights the advantages of using PSWF.\", \"weaknesses\": \"1. When comparing different INR methods, do you ensure that the same parameters are used? Could you provide the specific parameters for each INR method?\\n\\n2. I am curious about the decoding complexity of the PSWF-based INR. Could you provide the decoding speed or time for the different INR methods?\\n\\n3. Besides the vision tasks mentioned in the paper, can PSWF also improve performance in image super-resolution?\", \"questions\": \"I noticed that the authors use initial INR methods as their baselines. However, there are several approaches aimed at enhancing the expressivity and generalizability of INR, including improvements in training strategies [1] and input signals [2]. 
Could PSWF be applied to these methods to further enhance the representation performance of INR?\\n\\n\\n[1] Improved Implicit Neural Representation with Fourier Reparameterized Training, CVPR 2024\\n\\n[2] Disorder-invariant implicit neural representation, CVPR 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There are no ethics concerns.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the thoughtful and insightful questions, and these provided us with an excellent opportunity to clarify and further strengthen our work.\"}", "{\"title\": \"Please take a look at the authors' rebuttal and start a discusssion if needed\", \"comment\": \"Dear Reviewers,\\n\\nThanks for your contributions in reviewing this paper.\\n\\nAs the author-reviewer discussion deadline is approaching, please could you take a look at the authors' rebuttal (if not yet) and see if it addressed your concerns or if you have any further questions. Please feel free to start a discussion.\\n\\nThanks,\\n\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper proposes a novel Implicit Neural Representation (INR) utilizing Prolate Spheroidal Wave Functions (PSWFs) to improve performance and generalization in computer vision tasks. By leveraging the optimal space-frequency domain concentration of PSWFs, the proposed method addresses the noise artifacts over smoother areas and poor generalization of existing INRs, demonstrating superior results in image inpainting, novel view synthesis, edge detection, and image denoising.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper clearly explains the limitations of current INRs and proposes a novel activation function (PSWFs) for INR to overcome these limitations.\\n2. 
The localization and expressivity properties of PIN have been theoretically proven.\n3. The results validate the effectiveness of the proposed INR across various tasks. The authors not only show the good representation ability of PIN for various signals, such as images, Occupancy Fields, and NeRF, but also show that PIN has a very good performance on Image Inpainting.\", \"weaknesses\": \"1. The activation function seems to be rather computationally heavy, making it necessary to report its speed in applications.\n2. As I have reviewed the previous submission of this paper, I notice that Fig. 4 is replaced with a scene with better results. I wonder how these new results were obtained and why these results were not attached in the previous submission? Are they obtained by tuning parameters for each scene specifically?\n3. Since the first NeurIPS submission of this paper, several new INRs have been proposed, including FINER (and its extension, FINER++) and H-SIREN. However, these new INRs are not cited or compared within the current manuscript.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Motivated by the challenges faced by existing INRs and their struggle to generalise to unseen coordinates, the authors introduced the PSWF-based INRs, termed PIN, leveraging the optimal space-frequency domain concentration of PSWFs. Experimental evaluations over a few different vision tasks (including image inpainting, novel view synthesis, edge detection, and image denoising) show the effectiveness of the proposed PIN. The strengths of this paper include:\", \"The proposed method was well-motivated, with a clear and solid foundation for the approach.\", \"The paper did a good job in analysing and explaining the limitations of existing INRs, which could provide insights for following research in this direction.\", \"The proposed method was backed up with a comprehensive theoretical analysis.\", \"An extensive experimental analysis covering several vision tasks, showing the effectiveness and validity of the proposed method.\"], \"the_weaknesses_of_this_paper_include\": [\"The rationale for employing INRs to address low-level problems (as raised by a reviewer during 2nd phase discussion, details see below). Forward training-based models typically offer better generalisation capabilities.\", \"Potential issues with the infinitely differentiable property of cubic spline that was used for the PSWF in this paper (also confirmed in the 2nd phase).\", \"Insufficient evaluations, computational complexity concern, issues with the results, and missing comparison to recent related works.\", \"Most of the concerns/weaknesses were well addressed in the rebuttal phase and the reviewers also acknowledged that. Overall, this paper presented an interesting idea with insights into INRs, and the AC think this would be of interest to a group of audience in ICLR. 
As a result, the AC is happy to recommend an Accept, but the authors are **highly suggested to incorporate the further provided evidence and clarifications during the discussions to the final version, and please also merge the Appendix (currently in a separate file), which has some essential analysis and results, to the end of the main paper.**\"], \"additional_comments_on_reviewer_discussion\": \"This paper received review comments from four expert reviewers. During the rebuttal period, there was a heated discussion between the authors and reviewers. With the additionally provided results and evidence from the authors, most of the concerns raised by the reviewers were well addressed, and two reviewers raised their ratings, ending up with four borderline Accept ratings. In the AC-reviewers discussion phase, reviewer BT7N further summarised the strengths and weaknesses of this paper, including the concerns about the rationale for applying INRs to low-level problems and the differentiable property of the cubic spline. The AC agreed with them, while finding them not to be major issues. However, the authors are suggested to carefully consider these points and add discussions in their final version.\n\nAlthough this paper finally received a borderline rating, after carefully checking the paper and the discussions, the AC found this paper could provide insights to the community, and a group of the ICLR audience can benefit from reading it; as a result, it is worth being presented at ICLR. These considerations led to the final decision of this paper.\"}
The authors demonstrate improvements through experiments on various tasks, including image regression, neural radiance fields, and so on.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The motivation presented in the paper is compelling and provides a solid foundation for the proposed approach.\n2. This paper offers an insightful critique of previous works, identifying limitations in existing activation functions and highlighting the need for improvement. \n3. The experiments encompass a diverse range of tasks, allowing for a more comprehensive and thorough comparison. This diversity enhances the validity of the findings and demonstrates the effectiveness of the proposed method across different domains.\", \"weaknesses\": \"1. The exact implementation of the numerical estimation of the Prolate Spheroidal Wave Function lacks clarity in the main text. For instance, the approximation order is not explicitly stated, and the impact of the approximation order on the computational complexity (in terms of space/time consumption) and the model's performance is not clearly explained.\n2. The experiments in tasks such as Occupancy Field Representation and Neural Radiance Field appear to be conducted on a subset of the entire dataset, which may undermine the persuasiveness of the results.\n3. The presentation could be improved by refining certain details, such as employing vector graphics and using the \\\\autoref{} command for figure citations to ensure consistency and clarity.\", \"questions\": \"1. Could you clarify the approximation order used for the Prolate Spheroidal Wave Function (PSWF)? Additionally, does this approximation affect the theoretical properties of the PSWF? The paper mentions that PSWF possesses infinite support in space, but it remains unclear whether the finite-order approximation may potentially diminish this property.\n2. 
I observed that a bandwidth parameter \\\\(c\\\\) governs the frequency. Could you provide guidance on how to select this parameter effectively? Does it influence model performance, and if so, what strategies should users follow to determine an optimal value, especially for tasks in continuous domains like Neural Radiance Fields (NeRF), where there might be no frequency bounds, unlike the image regression scenario?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"First and foremost, the authors would like to thank the reviewer for their thoughtful and insightful questions, which have provided us with an excellent opportunity to clarify and strengthen our work.\", \"w1\": \"We sincerely thank the reviewer for their insightful question. We acknowledge that the numerical implementation has not been detailed in the main text and will make sure to incorporate this in the revised version. We would also like to gently point out that a more comprehensive description is available in Appendix regarding the approximation method we used. Specifically, we utilized cubic spline approximation, which provides a third-order approximation between two successive points while maintaining continuity and differentiability. A simple regression would often fail to retain the exact data point, and the differentiability properties between discrete points, where differentiability properties are indeed needed during the backpropagation. Therefore, the cubic spline approximation, which is of third-order and computationally efficient, serves as an optimal choice, and does the intended task.\", \"w2\": \"We are thankful for the reviewer regarding his question. 
The following table summarizes the occupancy field results for different INRs.\n\n\n**Table: IoU for Occupancy Fields**\n\n| **Model** | **Asian Dragon** | **Armadillo** | **Happy Buddha** | **Lucy** |\n|---------------|------------------|---------------|-------------------|-----------|\n| Siren | 0.95473 | 0.97685 | 0.98155 | 0.96503 |\n| Wire | 0.93780 | 0.95674 | 0.95618 | 0.99797 |\n| Gauss | 0.99620 | 0.98919 | 0.99594 | 0.99060 |\n| ReLU+PE | 0.98362 | 0.99274 | 0.99824 | 0.98211 |\n| PIN | 0.99837 | 0.99824 | 0.99895 | 0.99917 |\n\nThe following table summarizes the NeRF results.\n\n**Table: NeRF for different objects**\n\n| **Object** | **PIN** | **WIRE** | **SIREN** | **GAUSS** |\n|----------------|----------|----------|-----------|-----------|\n| Chair | 33.590 | 30.328 | 31.610 | 33.233 |\n| Hotdog | 36.340 | 33.452 | 31.261 | 36.224 |\n| Mic | 33.150 | 29.060 | 31.725 | 32.761 |\n| Ship | 28.822 | 25.902 | 27.252 | 28.767 |\n| Materials | 29.654 | 26.184 | 28.028 | 29.407 |\n| Ficus | 27.235 | 22.592 | 24.556 | 26.940 |\", \"w3\": \"We are thankful for the suggestion by the reviewer. The authors will make sure to improve the presentation of the paper by utilizing vector graphics and the suggested commands in the revised manuscript.\", \"q1\": \"We utilized a cubic spline approximator, which is of order 3. The authors do not believe this would affect any theoretical properties of the PSWFs: when we obtain the discretized solution to the governing equation for the PSWF, we utilize a cubic spline approximator between every pair of successive points. This cubic spline approximation is necessary because a continuous function, rather than a set of discrete points, is required as an activation function in a neural network. So, as far as we know, there is no potential way of diminishing any theoretical properties. 
However, if a simpler approximator such as a basic quadratic or cubic polynomial, or even a neural network, were used to approximate the discretized solution, it could potentially compromise some of the properties of PSWFs. These methods do not guarantee passing through the discretized points, which could lead to inaccuracies in differentiation and, consequently, incorrect outcomes during backpropagation.\", \"q2\": \"We are grateful for reviewer regarding this question. As explained in Section 6 of the paper, we utilize an explicit control mechanism, where the frequency is now governed by the parameter \\\\( \\\\omega \\\\). This makes \\\\( \\\\omega \\\\) the frequency-controlling variable instead of \\\\( c \\\\). Now, regarding how to effectively select the parameter \\\\( \\\\omega \\\\), one possible approach is to perform a grid search to identify the values that maximize the results. However, as you mentioned, this often leads to suboptimal outcomes when a different dataset is presented, particularly in NeRF and other applications where strong generalization is crucial. (Incidentally, this is the standard approach adopted by many INR baselines.) Instead of relying on a grid search, we use the most straightforward configuration for \\\\( \\\\omega \\\\), which is \\\\( \\\\omega = 1 \\\\). We initialize all experiments with this value and make \\\\( \\\\omega \\\\) a learnable parameter. Consequently, it gets adjusted dynamically based on the loss function of the intended task. This approach benefits from our spline approximation, which allows the activation function parameters to traverse in the loss landscape more effectively, as explained in the Appendix of the paper\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for the detailed response and the additional evaluations. 
My concerns are well resolved and I am happy to raise the score.\"}", "{\"comment\": \"We sincerely thank the reviewer for the thoughtful and insightful questions, and these provided us with an excellent opportunity to clarify and further strengthen our work.\"}", "{\"comment\": \"We sincerely thank the reviewer for the thoughtful and insightful questions, and these provided us with an excellent opportunity to clarify and further strengthen our work.\"}", "{\"comment\": \"First and foremost, the authors would like to thank the reviewer for their thoughtful and insightful questions, which have provided us with an excellent opportunity to clarify and strengthen our work.\", \"w1\": \"For all the experiments, we have ensured the optimal parameters of the other methods have been used. However, for the proposed method, every experiment has been conducted with the same parameters unlike others. As can be seen, these baselines do require specific fine-tuning to get the results. However, PIN does not require any of those conditions. \\nThe following table summarizes the activation function parameters utilized for each application.\\n\\n**Table: Configurations for WIRE, SIREN, GAUSS, and PIN Across Experiments**\\n\\n| **Method** | **Configuration** |\\n|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|\\n| WIRE | Image Representation, Inpainting, Edge Detection ($\\\\omega=20$, $\\\\sigma=10$), Occupancy Field ($\\\\omega=20$, $\\\\sigma=40$), Image Denoising ($\\\\omega=5$, $\\\\sigma=5$), NeRF ($\\\\omega=40$, $\\\\sigma=40$) |\\n| SIREN | $\\\\omega=30$ for all experiments |\\n| GAUSS | $\\\\sigma=30$ for all except NeRF ($\\\\sigma=7.85$) |\\n| PIN | $T=1$, $\\\\omega=1$, $b=0$ for all |\", \"w2\": \"We are thankful for the reviewer regarding the question. The following table summarizes the training speed corresponding to different INRs. 
As can be seen from this table, PIN's runtime is comparable with that of previous INRs.\n\n**Table: Convergence Time Across Methods**\n\n| **Method** | **Convergence Time (min)** |\n|---------------|-----------------------------|\n| PIN | 6.63 |\n| WIRE | 11.59 |\n| SIREN | 6.29 |\n| GAUSS | 7.47 |\n| ReLU+PE | 4.05 |\", \"w3\": \"We are thankful to the reviewer for raising this question. We attempted the image super-resolution task on the \\\"Boy\\\" image from the Set14 dataset [ref] and the \\\"Cameraman\\\" image. The following table summarizes the results for the \\\"Boy\\\" and \\\"Cameraman\\\" images in the 2nd and 3rd columns, respectively.\n\n**Table: PSNR for Image Super Resolution**\n\n| **Method** | **PSNR (dB)** | **PSNR (dB)** |\n|---------------|---------------|---------------|\n| PIN | 20.97 | 23.73 |\n| WIRE | 19.26 | 22.33 |\n| SIREN | 19.58 | 23.17 |\n| GAUSS | 20.24 | 22.70 |\n| ReLU+PE | 18.75 | 21.67 |\n\n[ref] Awesome-Super-Resolution/dataset.md at master, ChaofWang/Awesome-Super-Resolution \u2014 github.com: https://github.com/ChaofWang/Awesome-Super-Resolution/blob/master/dataset.md [Accessed 20-11-2024]", \"q1\": \"We are thankful for the question, and the suggestions. The authors acknowledge that PIN is compared with initial methods. When considering improvements to [1], we believe there is potential to enhance its methods using PSWFs. However, the core idea of [1] is to employ a fixed Fourier basis and decompose the neural network weight matrix into a product of trainable and fixed Fourier basis matrices. Given this, a key question arises: how can the Fourier basis and PSWFs be effectively combined? 
A possible approach is to modify [1] by incorporating Fourier basis elements of PSWFs, and using the same weight update rule as in [1]. However, without experimental validation, it is difficult to definitively state whether this adaptation would further enhance [1].\nWhen it comes to enhancing [2], which basically looks at the problem from another angle, more specifically the input, we firmly believe PSWFs can be incorporated into [2], as their proposal mechanism is based on the input. Detailed experiments are indeed needed to verify the claim. To further demonstrate the effectiveness of the proposed method, we compared the proposed approach with the recently released FINER, INCODE, and FR-INR on the entire Kodak image dataset. The following table summarizes the results. The suggested references, new methods, and additional results will be included in the revised version of the paper.\n\n**Table: Comparison of Methods Based on PSNR (dB)**\n\n| **Method** | **PSNR (dB)** |\n|---------------|---------------|\n| PIN | 40.17 |\n| INCODE | 34.07 |\n| FR-INR | 37.91 |\n| FINER | 35.87 |\"}
Egd7Vi1EuA
Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning
[ "Yanrui Du", "Sendong Zhao", "Jiawei Cao", "Ming Ma", "Danyang Zhao", "FENGLEI FAN", "Ting Liu", "Bing Qin" ]
Instruction Fine-Tuning (IFT) has become an essential method for adapting base Large Language Models (LLMs) into variants for professional and private use. However, researchers have raised concerns over a significant decrease in LLMs' security following IFT, even when the IFT process involves entirely benign instructions (termed Benign IFT). Our study represents a pioneering effort to mitigate the security risks arising from Benign IFT. Specifically, we conduct a Module Robustness Analysis, aiming to investigate how LLMs' internal modules contribute to their security. Based on our analysis, we propose a novel IFT strategy, called the Modular Layer-wise Learning Rate (ML-LR) strategy. In our analysis, we implement a simple security feature classifier that serves as a proxy to measure the robustness of modules (e.g. $Q$/$K$/$V$, etc.). Our findings reveal that the module robustness shows clear patterns, varying regularly with the module type and the layer depth. Leveraging these insights, we develop a proxy-guided search algorithm to identify a robust subset of modules, termed $Mods_{Robust}$. During IFT, the ML-LR strategy employs differentiated learning rates for $Mods_{Robust}$ and the rest modules. Our experimental results show that in security assessments, the application of our ML-LR strategy significantly mitigates the rise in harmfulness of LLMs following Benign IFT. Notably, our ML-LR strategy has little impact on the usability or expertise of LLMs following Benign IFT. Furthermore, we have conducted comprehensive analyses to verify the soundness and flexibility of our ML-LR strategy.
[ "Large Language Models", "Security", "Instruction Fine-Tuning" ]
https://openreview.net/pdf?id=Egd7Vi1EuA
https://openreview.net/forum?id=Egd7Vi1EuA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xQy4NruAke", "haj6N8TA76", "VSjYXD5ygK", "RCzxdlH8Em", "JGId1NihYI" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730097043645, 1730309771764, 1734106154216, 1730584788134, 1730506148550 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10885/Reviewer_A3vn" ], [ "ICLR.cc/2025/Conference/Submission10885/Reviewer_RoZJ" ], [ "ICLR.cc/2025/Conference/Submission10885/Authors" ], [ "ICLR.cc/2025/Conference/Submission10885/Reviewer_AxNq" ], [ "ICLR.cc/2025/Conference/Submission10885/Reviewer_h9Pj" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on preserving the safety alignment of LLMs when performing instruction tuning on benign datasets. The authors propose a solution that involves training a safety feature classifier, finding the safety-robust regions in LLMs, and fine-tuning with different learning rates. Experiments show the effectiveness of the proposed method compared to vanilla instruction tuning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The focused phenomenon is worth exploring.\", \"The idea is straightforward and intuitively feasible.\", \"Experiments show the effectiveness of the proposed method in preserving safety alignment compared to vanilla instruction tuning.\"], \"weaknesses\": [\"Missing references. [1] also shows benign instruction tuning can compromise the safety alignment of LLMs. [2] also verifies safety-critical LLM parameter regions.\", \"Missing key baselines. [2] also searches for parameters that are critical to safety alignment, which is a key baseline for this paper. The authors should explain the advantages of their methods compared to [2] and also verify this in the experiments.\", \"The experiment in Table 1 is not convincing. Transforming data from AdvBench by substituting several words makes the classifier easy to train. 
Then, testing on in-domain AdvBench-like data to achieve 100% accuracy is straightforward, which does not prove much.\", \"It is unclear why the authors apply such perturbations in Lines 205-209. What are the motivations behind these?\", \"[1] Yi, Jingwei, et al. \\\"On the vulnerability of safety alignment in open-access llms.\\\" Findings of the Association for Computational Linguistics ACL 2024. 2024.\", \"[2] Wei, Boyi, et al. \\\"Assessing the brittleness of safety alignment via pruning and low-rank modifications.\\\" arXiv preprint arXiv:2402.05162 (2024).\"], \"questions\": \"no\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to mitigate the issue that large language models' (LLMs') security decreases after Instruction Fine-Tuning (IFT) on benign data. The authors first train a feature classifier (proxy) that discerns benign/malicious prompts at the LLM's last layer. Then, they perturb different modules in the LLM to see their impact on the proxy's performance on malicious prompts. The authors find some interesting patterns during the process. Based on the impact, the authors assign a large learning rate to those robust modules (named $Mods_{Robust}$) and a small learning rate to those sensitive modules during IFT on benign data. The authors name this IFT strategy the Modular Layer-wise Learning Rate (*ML-LR*) strategy. The experimental results show *ML-LR* can properly mitigate the security decrement while keeping helpfulness in two IFT scenarios (general and math-specific) over different LLMs (e.g., the Llama and Vicuna series).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation and insight of this paper are sound. The way to detect the subset of security-robust modules is straightforward, and scanning all modules for different LLMs seems like a substantial undertaking.\\n2. 
The authors detect interesting patterns in security-robust modules, which can be useful for the community's future research.\\n3. Experimental results show *ML-LR*'s appropriate security-maintenance performance in benign-data IFT.\\n4. This paper is well-structured and easy to read.\", \"weaknesses\": \"1. The proxy used to discern $Mods_{Robust}$ can be further upgraded. Sometimes the LLM can discern benign/malicious prompts, but fails to correctly react to them (e.g., refuse or not refuse) [1] (discussion on hypothesis 1 in [1]). From this viewpoint, just discerning benign/malicious prompts without considering the model's response may lead to insufficient performance when finding $Mods_{Robust}$. But it's fine to leave this for future research.\\n2. The authors use AdvBench to train the proxy, and also use it (though the splits may not overlap) as one of the benchmarks when evaluating the IFT performance. It would be better to do the security evaluation on other safety-related datasets with different distributions.\\n3. Some figures and tables can be further polished. For example, the resolution of Figure 5 is low, with some words (e.g., #1->#7) seeming a little blurred; Figure 3(b)'s layout can be further adjusted to match 3(a); and some highlights (bold or other markers) can be added to some table results (for example, Table 2; it would be better to let readers see the advantage of *ML-LR* via some markers).\\n\\n[1] \\\"On Prompt-Driven Safeguarding for Large Language Models\\\" ICML 2024, Zheng et al.\", \"questions\": \"1. In practice, more data can be used to IFT an LLM, which may lead to more training iterations. In this case, even if the learning rate of those sensitive modules is small, they may eventually get updated enough to impact the model's security performance. Do you think this issue will take place in practice? And in your experiments, does this issue take place when *ML-LR* is trained for more epochs?\\n2. 
I notice in *Line 180* you mention that you perform the analysis on 4 LLMs, while in the experiments you use 5 LLMs (with a new Llama3.1-8b). Why is the analysis on Llama3.1-8b skipped?\\n3. Did you ever test *ML-LR*'s performance on a dataset with a small amount of malicious data? It might be interesting to see its performance in this case. Never mind if you have no time to do the experiment.\\n4. What do you think is *ML-LR*'s advantage compared with strategies that only perform inference-stage intervention on an LLM's modules (e.g., *RePE* [2] or training a safety suffix/prefix)?\\n5. See weaknesses.\\n\\n[2] \\\"Representation Engineering: A Top-down Approach to AI Transparency\\\" Andy Zou et al.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose fine-tuning certain components of an LLM with a lower learning rate to increase model robustness to adversarial attacks. They focus on the setting of benign instruction tuning, where the instructions themselves are not harmful.\\n\\nThey motivate this approach through a preliminary analysis of the effects on model safety of perturbing different model components (e.g. Q/K/V matrices) at different layers. They do this by training a harmful prompt classifier on the final embeddings of the LLM, and analyzing the drop in accuracy of this classifier when a given component is perturbed (although, as far as I can tell, they do not say how exactly they perturb the components). \\n\\nThe authors find that (1) components in earlier layers are more sensitive (in the sense described above) than ones in later layers, (2) Q and K matrices are more sensitive than other kinds of components, and (3) combinations of robust (i.e. non-sensitive) components can be sensitive (i.e. 
perturbing all components at the same time has a big impact in the classifier\\u2019s accuracy). \\n\\nBased on these three findings, they propose a heuristic search routine for identifying sensitive modules inside the LLM. They then propose decreasing the learning rate of these modules by a large factor (on the order of $10^{-4}$) relative to the remaining components during fine-tuning, in effort to increase robustness to adversarial attacks. This method is referred to as Modular Layer-wise Learning Rate (ML-LR).\\n\\nThey then evaluate this method\\u2019s effect on general model capabilities, domain-specific capabilities, and adversarial robustness. They work with Llama 2, Llama 3 and Vicuna models, and use LoRA fine-tuning. They find that regular instruction fine-tuning (IFT) and ML-LR on general domain data produce similar results on capabilities evaluations, but ML-LR leads to lower attack success rates on many adversarial attack benchmarks. They conduct a similar study when fine-tuning on a mathematics task, and also find that the resulting capabilities are similar, but the susceptibility to attacks is lower (roughly half the attack success rate), for ML-LR compared to IFT.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Safety evaluations include several benchmarks (namely six), and are represented in a visually intuitive way using radar charts.\\n\\n1. The authors use several open-weights models in their evaluations, indicating their results are not model-specific.\", \"weaknesses\": \"1. Module robustness analysis uses questionable scientific methodology: the authors pre-train a harmfulness classifier on the final embeddings of the LLM, and proceed to claim that, if a perturbation on a model component decreases the accuracy of this *same* pre-trained classifier, then this corresponds to the model safety being sensitive to this module. 
However, perturbing one model component can simply produce a distribution shift in the final embeddings, and need not influence top-level model behavior.\\n\\n1. In fact, the three \\u201cpatterns\\u201d identified by the authors on their perturbation studies could just as well be explained through the lens of distribution shift:\\n\\n - Pattern A: perturbations in earlier layers can get amplified throughout the forward pass, causing a larger distribution shift, and causing classifier accuracy to drop more. This is similar to how, in a physical system, perturbing the initial conditions can lead to larger discrepancies in the final state, as the perturbation can get amplified by system dynamics through time.\\n\\n - Pattern B: perturbations in Q and K modules impact the attention coefficients, and in particular are passed through a softmax function (as the coefficients are given by $a_{ij} = softmax(x_i^T W_Q^T W_K x_j / \\\\sqrt{d})$), which can significantly alter the attention patterns of the model, leading to greater distribution shift.\\n\\n - Pattern C: perturbing several model components and composing them can amplify each perturbation (e.g. multiplicatively), leading to greater distribution shift.\\n\\n1. Hence, I am not convinced by the author\\u2019s claim that their robustness analysis says anything about the \\u201csecurity\\u201d of individual model components. Crucial to my perception is the fact that they don\\u2019t retrain the harmfulness classifier after the perturbations, and simply use it as an absolute measure of model safety.\\n\\n1. As highlighted above, the authors (to the best of my understanding) do not explain how they perturb components of the model in section 3.2, rendering the section unreproducible, in addition to arguably fundamentally scientifically flawed as argued above. Since this section is a key part of the paper, I consider that this constitutes enough reason for this paper to be rejected.\\n\\n1. 
Presentation: \\n\\n - the formatting of the tables in the paper does not conform to usual standards of conference publications, e.g. using vertical bars.\\n\\n - In certain tables, e.g. Table 3, the names of models (e.g. Llama2-7B) are placed very strangely among the numerical results. In the case of Table 3, they are also incorrectly formatted (e.g. Llama213B rather than Llama2-7B or other reasonable formatting choices).\", \"questions\": \"1. What motivates your choice of a linear network architecture (followed by a sigmoid activation) in equation (1)? Did you observe a big difference compared to using regular logistic regression?\\n\\n1. The decrease in learning rate of \\u201csensitive components\\u201d is very significant, being as large as $10^{-4}$ in some cases. One important ablation for this method would be to set the learning rate of these modules to zero, since they are already much lower than the rest. Have the authors evaluated this alternative?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces ML-LR, a novel IFT strategy. It first categorizes different modules based on robustness and then improves model robustness by reducing the learning rates of modules more sensitive to malicious instructions. Experimental results demonstrate that ML-LR is a more robust fine-tuning method compared to IFT.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed ML-LR method is novel.\\n\\n2. The paper\\u2019s narrative is clear and logically structured, first identifying three module patterns through evaluation and then designing the IFT strategy based on these patterns.\", \"weaknesses\": \"1. Some results in the paper are questionable and require further explanation and correction. 
For example, in Table 2, why does the quality of the model\\u2019s generated responses decrease after IFT?\\n\\n2. The paper only uses the LLaMA series and LLaMA-based Vicuna models. Evaluations on a broader range of models with different architectures may need to be conducted.\\n\\n3. Fig. 6 shows that ML-LR is less effective than IFT against optimization-based attacks (e.g., GCG). Given that GCG is a state-of-the-art optimization-based attack, the paper should explain why this outcome occurs.\", \"questions\": \"1. My biggest concern is, the experimental results are not realistic. In Table 2, the quality of responses generated by BASE is even better than that of IFT and ML-LR. If that\\u2019s the case, why do we use fine-tuning? The model obtained through fine-tuning not only has reduced usability but also decreased robustness. This might imply that the quality of the fine-tuning training data is poor. The paper should at least ensure that higher quality recovery can be achieved after IFT or ML-LR.\\n\\n2. The evaluation is somewhat lacking. For the general-domain scenario, the paper only tested the LLaMA series models, and for the specific-domain scenario, it only added the Vicuna series models, which are also based on LLaMA. Since the paper achieves fine-tuning by adjusting the learning rate of different parameters in the model, it should test a wider variety of LLM architectures, such as Mistral and Gemma.\\n\\n3. The experimental results are also not very promising. Specifically, in Fig. 6, when the training loss of the two methods is similar, ML-LR is sometimes less effective against GCG (the state-of-the-art jailbreaking attack) compared to IFT.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EgP6IEyfYJ
WATERMARKING GRAPH NEURAL NETWORKS VIA EXPLANATIONS FOR OWNERSHIP PROTECTION
[ "Jane Downer", "Ren Wang", "Binghui Wang" ]
Graph Neural Networks (GNNs) are the mainstream method to learn pervasive graph data and are widely deployed in industry, making their intellectual property valuable. However, protecting GNNs from unauthorized use remains a challenge. Watermarking, which embeds ownership information into a model, is a potential solution. However, existing watermarking methods have two key limitations: First, almost all of them focus on non-graph data, with watermarking GNNs for complex graph data largely unexplored. Second, the de facto backdoor-based watermarking methods pollute training data and induce ownership ambiguity through intentional misclassification. Our explanation-based watermarking inherits the strengths of backdoor-based methods (e.g., robust to watermark removal attacks), but avoids data pollution and eliminates intentional misclassification. In particular, our method learns to embed the watermark in GNN explanations such that this unique watermark is statistically distinct from other potential solutions, and ownership claims must show statistical significance to be verified. We theoretically prove that, even with full knowledge of our method, locating the watermark is an NP-hard problem. Empirically, our method manifests robustness to removal attacks like fine-tuning and pruning. By addressing these challenges, our approach marks a significant advancement in protecting GNN intellectual property.
[ "Watermarking", "GNNs", "Ownership", "Verification", "Ownership Verification", "Explanation", "Graph" ]
Reject
https://openreview.net/pdf?id=EgP6IEyfYJ
https://openreview.net/forum?id=EgP6IEyfYJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xKm4xQ1wGz", "wAcNj1yw7c", "uQ4VPTxSmz", "gPtx7tHU5g", "gJnDui1Lnq", "g4LUvIqybk", "e68ZEDOx7f", "bMkw1751R0", "b29mct5IAn", "TbwrkeMcAs", "SrIpYPkcjz", "RxeqXUPWuy", "MimmHHiDSD", "IcExGuQjnz", "BbMclhFWk7", "7lwdbHy8MB", "1JxHeCjsRt", "17MH7yPbPw", "0NGsZvcHbg", "0DYfhZ8NRY" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732262452532, 1730756182429, 1731877346208, 1731050753355, 1732262372474, 1731877606839, 1731460318913, 1732570115190, 1737523525973, 1732263454854, 1732263101073, 1732568203505, 1732702128455, 1731078756007, 1734603958810, 1730655701345, 1732167897128, 1732262844284, 1732483017549, 1732601314359 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Reviewer_7pbz" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Reviewer_k1GL" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "~Yiming_Li1" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ "ICLR.cc/2025/Conference/Submission2714/Reviewer_ub6g" ], [ "ICLR.cc/2025/Conference/Submission2714/Area_Chair_HF3v" ], [ "ICLR.cc/2025/Conference/Submission2714/Reviewer_bM1q" ], [ "~Yiming_Li1" ], [ "ICLR.cc/2025/Conference/Submission2714/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2714/Area_Chair_HF3v" ], [ "ICLR.cc/2025/Conference/Submission2714/Reviewer_k1GL" ] ], "structured_content_str": [ "{\"title\": \"References\", \"comment\": \"[1] Zhao et al. \\u201cWatermarking Graph Neural Networks by Random Graphs.\\u201d ISDFS, 2020\\n\\n[2] Xu et al. \\u201cWatermarking Graph Neural Networks based on Backdoor Attacks.\\u201d EuroS&P 2023\\n\\n[3] Bansal, Arpit, et al. \\u201cCertified Neural Network Watermarks with Randomized Smoothing.\\u201d ICML 2022 \\n\\n[4] M\\u00fcller, Luis, Mikhail Galkin, Christopher Morris, and Ladislav Ramp\\u00e1\\u0161ek. \\u201cAttending to Graph Transformers.\\u201d TMLR, 2024.\"}", "{\"summary\": \"This paper aims to inject watermarks into graph neural networks, in order to serve as a verification strategy to protect the ownership of the graph neural network. To achieve this goal, the authors investigate a new methodology of graph model explanation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper studies a novel problem for GNN ownership verification.\\n2. The proposed method is interesting and the empirical results demonstrate its effectiveness. \\n3. The authors conduct a serious robustness evaluation of the proposed methodology.\", \"weaknesses\": \"**Concerns Regarding the Presentation**\\\\\\nFrom Section 3 onward, the paper uses too many math notations and equations, but doesn't provide sufficient explanation and clarification. This makes the paper hard to follow. For example, for the GNN explanation part in Section 3.2, it is hard to see how someone can use it to explain the GNN prediction. Moreover, the switch from Lasso to Ridge Regression will give weights to all features. I am not sure how to find important features based on the Ridge Regression output. \\n\\n**Concerns Regarding the Threat Model**\\\\\\nThe threat model discussed in this paper appears to lack sufficient justification. 
Specifically:\\n1. It is unclear why or under what circumstances \\u201cthe adversary does not know the location of the watermarked subgraphs, but knows the shape and number of the subgraphs.\\u201d In practice, if the adversary is not the model trainer, they may have limited knowledge about the watermark.\\n2. The authors also mention that the model ownership verifier cannot access the protected model. It\\u2019s not immediately clear why the verifier would be unable to access the model parameters. Consider an analogy: if an artist who owns a painting sues another party for copyright infringement, it would be unusual for the artist to prevent the judges from examining the original artwork.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comparison of Our Method to EaaW\", \"comment\": \"# Comparison to EaaW\\nThank you for the review and the opportunity to address your concerns regarding our work\\u2019s relation to \\u201cExplanation as a Watermark\\u201d (EaaW) [1]. 
We address each of your key points below, outlining our contributions, differences, and similarities.\\n\\n### **Contributions**\\n\\n- ***Watermarking in the graph domain.*** Our work fills a gap in the literature by extending explanation-based watermarking to the graph domain, requiring non-trivial adaptations for explanation generation and watermark alignment.\\n\\n- ***Verification without reliance on ground truth.*** Our method avoids requiring a third party to know the ground-truth watermark, enhancing security by eliminating the need to share private information.\\n\\n- ***Theoretical hardness of watermark identification.*** We prove that locating our watermarks is NP-hard, providing a formal guarantee of robustness.\\n\\n### **Differences**\\n\\n- ***Domain-specific focus.*** Our work focuses on the graph domain, addressing structural relationships, multi-hop dependencies, and graph connectivity\\u2014challenges absent in the standard DNNs used in EaaW.\\n\\n- ***Beyond LIME-Based Sampling.*** Unlike GraphLIME [2], LIME [3], and EaaW, we do not rely on local sampling around individual inputs. Instead, our target subgraphs consist of multiple disconnected nodes sampled from across the node classification network, reducing overlap with adversary-selected subgraphs. Additionally, we watermark only a selected subset of node features rather than the full feature signal, enhancing robustness and minimizing watermark size. To clarify, GraphLIME\\u2019s primary influence on our work lies in its use of Gaussian kernel matrices for regression, as described in Section 3.2.\\n\\n - ***Regression input.*** EaaW\\u2019s regression inputs are binary masks isolating specific feature subsets from a single trigger sample, enabling targeted attribution analysis. 
In contrast, our regression takes a matrix of node features aggregated across disconnected nodes, capturing broader feature relationships in the graph domain.\\n\\n- ***Disjoint training sets.*** EaaW optimizes for classification performance on a training set that includes the trigger sample. In contrast, our method uses disjoint subsets for classification and watermarking optimization; we found this separation was necessary for loss convergence.\\n\\n- ***Verification method.*** Our method does not compare adversarial samples to a ground-truth trigger, instead requiring target subgraphs with maximally similar explanations.\\n\\n\\n### **Similarities**\\n\\n- ***Motive and high-level insight.*** \\nBoth works aim to address the limitations of backdoor-based watermarking by watermarking explanations. This does not undermine our originality, as the challenges of backdoor mechanisms are well-documented [4,5,6], providing a natural and necessary starting point to consider other embedding spaces, like explanations.\\n\\n- ***High-level embedding approach.*** Both EaaW and our work use ridge regression and a hinge-like loss function to embed watermarks. We acknowledge this similarity and will cite EaaW in section 4.1 of our paper. However, our methodology diverges in domain, sampling, and verification.\\n\\n---\\n\\nWe acknowledge that EaaW should have been cited. This omission was not intentional. Given the similar embedding approach, deliberately omitting citation would have been counterproductive; it was a mistake made during a time-constrained submission process, and we regret this oversight. While our embedding approaches overlap, our contributions lie in domain-specific adaptations, novel applications, and verification strategies. These distinctions will be clarified further in our revisions.\\n\\n---\\n\\n### References\\n\\n[1] Shao, Shuo, Yiming Li, Hongwei Yao, Yiling He, Zhan Qin, and Kui Ren. 
\\u201cExplanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution.\\u201d Proceedings of the Network and Distributed System Security Symposium (NDSS), 2025.\\n\\n[2] Huang, Q. et al. \\u201cGraphLIME: Local Interpretable Model Explanations for Graph Neural Networks.\\u201d IEEE Transactions on Knowledge and Data Engineering 35 (2020): 6968-6972.\\n\\n[3] Ribeiro, Marco Tulio et al. \\u201c\\u201cWhy Should I Trust You?\\u201d: Explaining the Predictions of Any Classifier.\\u201d Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016): n. pag.\\n\\n[4] Gu, Tianyu et al. \\u201cBadNets: Evaluating Backdooring Attacks on Deep Neural Networks.\\u201d IEEE Access 7 (2019): 47230-47244.\\n\\n[5] Liu, Jian, Rui Zhang, et al. \\u201cFalse Claims against Model Ownership Resolution.\\u201d Proceedings of the USENIX Security Symposium, 13 Apr. 2023.\\n\\n[6] Yan, Yifan, et al. \\\"Rethinking {White-Box} Watermarks on Deep Learning Models under Neural Structural Obfuscation.\\\" *Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23)*, 2023, pp. 2347\\u20132364.\"}", "{\"summary\": \"This paper proposes a graph explanation-based watermarking method. The watermarks are set as specific explanations and are trained into the protected model by backdoor techniques. During verification, explanations of watermarked data will be compared with the ground-truth explanations to determine the ownership. 
In empirical evaluations, the authors demonstrate the performance of the proposed watermarking method from different perspectives, e.g., effectiveness, uniqueness, robustness, and undetectability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"It is novel to utilize explanations as 'poisoned labels' of watermarked data, which will not negatively influence the performance of GNNs.\", \"weaknesses\": \"The major concern is how verifiers could obtain the explanations via black-box access to the protected model during black-box verification. If I read carefully enough, the explanation of GraphLIME can only be obtained under white-box settings with node embeddings or other model parameters. If the explanations can instead be obtained via black-box access, how query-efficient is this? I welcome discussion from the authors and will consider raising the score if the answer is appropriate.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comments! We address each of your concerns below.\\n\\n# **Tight threat model.** \\n\\nWe\\u2019d like to clarify the capabilities of each party in our threat model.\\n\\n- ***Adversaries:*** We assume adversaries have black-box access to the model and no knowledge of the watermark. Even in cases where adversaries gain white-box access (e.g., for fine-tuning or pruning), locating the watermark remains computationally intractable without precise knowledge of its location.\\n\\n- ***Verifier:*** The verifier only requires black-box access to the model for verification, making the approach broadly applicable and robust.\\n\\n- ***Model Owner:*** The model owner has white-box access to the training process and the data. 
This assumption is realistic and not unique to our method; many backdoor-based watermarking approaches also rely on access to training data for embedding or verification purposes [1-3].\\n\\nOur method minimizes reliance on data pollution and avoids ambiguity, addressing the limitations of backdoor-based watermarking methods while leveraging access commonly assumed in this domain. We believe this combination of practicality and robustness makes our approach suitable for a wide range of scenarios, even if universal applicability is not guaranteed.\\n\\n# **Fine-tuning with a larger learning rate.** \\n\\nOur default choice for fine-tuning learning rate was 0.1x the learning rate on the original task. To address your concern, we have conducted additional fine-tuning experiments with scaled learning rates (1x and 10x) \\u2013 see Figure 12 in the appendix of our revised paper.\\n\\nWe find that larger learning rates accelerate the rise in the MI p-value, indicating faster degradation of the watermark. For instance, at 1x, the p-value increases sharply within the first 10 epochs. However, this comes at a significant cost: by the time fine-tuning accuracy (on the new dataset) increases to a reasonable level, accuracy on the *original* training set drops substantially, making the attack impractical as the model\\u2019s utility on the original task is severely compromised.\\n\\nAdditionally, the fine-tuning results in the original submission were inadvertently obtained by fine-tuning on the original training dataset. In the updated experiments, we fine-tuned on a separate validation set to better reflect realistic attack scenarios. This adjustment introduced some differences\\u2014most notably, a decline in original training accuracy\\u2014but, as previously noted, this strengthens our position by showing that fine-tuning attacks not only affect the watermark but also significantly degrade the model\\u2019s utility, reducing the attack\\u2019s stealth. 
Despite these changes, the p-value trends remain consistent: for Photo and PubMed datasets, the p-value stays roughly below 0.1 throughout fine-tuning, and for CS, it only rises above 0.1 after prolonged fine-tuning. These findings reaffirm the robustness of our method while highlighting the trade-offs faced by an attacker.\\n\\n\\n# **Other graph tasks.**\\n\\nWhile our results focus on node classification, our method can be extended to other graph learning tasks, such as edge classification and graph classification. The key is to obtain $n \\\\times F$ features matrices that we can derive explanations from. For node classification, each of our $T$ target subgraphs yields a $n \\\\times F$ matrix of node features. For edge classification, we can generate similar matrices for edge features. For graph classification, we can accomplish something similar by having each row in a $n \\\\times F$ matrix represent the features of an individual graph; rather than have one such matrix for each of $T$ target subgraphs, we instead have one matrix per *collection* of subgraphs. \\n\\nPlease see further details of this extended logic in Appendix Section A.5 in our revisions.\\n\\n# **Other models.**\\n\\nGraph Neural Networks are the more popular choice for graph learning tasks and a natural starting point for node classification. In contrast, Graph Transformers introduce significant computational complexity due to their reliance on global attention mechanisms [4], adding unnecessary overhead to the multi-part optimization task. While extending our method to Graph Transformers is an interesting direction for future work, it falls outside the current scope of our focus.\\n\\n# **Accuracy without watermarking.** \\n\\nWe have updated the paper (Table 1) to include accuracy rates in the absence of watermarking. For your convenience, we list them here. 
Note that accuracy rates are reasonable both with and without watermarking.\\n\\n|**Accuracy (Train/Test)**|||||||\\n|-|-|-|-|-|-|-|\\n||***GCN***||***SGC***||***SAGE***|\\n|**Dataset**|No Watermark|Watermark|No Watermark|Watermark|No Watermark|Watermark|\\n|***Photo***|91.3/89.4|90.9/88.3|91.4/89.9|90.1/88.0|94.2/90.8|94.1/88.2|\\n|***PubMed***|88.6/85.8|85.7/81.4|88.8/85.9|85.3/81.4|90.5/86.0|91.1/81.2|\\n|***CS***|98.5/90.3|96.8/89.8|98.4/90.3|96.7/90.1|100.0/88.4|99.9/88.9|\", \"title\": \"Response to Reviewer ub6g\"}", "{\"title\": \"Suggested Improvements\", \"comment\": \"# Suggested Improvements\\nTo address concerns about normality, we applied the Shapiro-Wilk test [1] to our Matching Indices (MI) distributions. Below are the average p-values from 5 trials for three GNN architectures. These results fail to reject the null hypothesis of normality.\\n\\n|Shapiro-Wilk Test p-values||||\\n|-|-|-|-|\\n|Dataset|SAGE|SGC|GCN|\\n|Photo|0.324|0.256|0.345|\\n|CS|0.249|0.240|0.205|\\n|PubMed|0.249|0.227|0.265|\\n\\n### **Baselines and Robustness Experiments.** \\nWe aim to include these results as part of our broader revisions alongside other responses.\\n\\n---\\n\\n### References\\n\\n[1] Ghasemi, Asghar and Saleh Zahediasl. \\u201cNormality Tests for Statistical Analysis: A Guide for Non-Statisticians.\\u201d International Journal of Endocrinology and Metabolism 10 (2012): 486 - 489.\"}", "{\"title\": \"Alert for Potential Plagiarism: Highly similar to our work without proper reference and discussions\", \"comment\": \"**Summary**: This paper proposed to watermark the explanations of Graph Neural Networks (GNN) to protect the copyright of GNNs. Specifically, this paper utilized GraphLIME to generate the explanations and embed the watermark into them via a dual-objective loss function. 
This paper further provided a theoretical analysis of the difficulty of locating the watermark.\\n\\n\\n**Soundness**: fair\\n\\n**Presentation**: fair\\n\\n**Contribution**: Poor\\n\\n\\n\\n**Strengths**\\n\\n1. The studied problem is of great significance and sufficient interest to ICLR audiences.\\n2. The main idea is easy to follow to a large extent.\\n3. The authors try to provide theoretical analyses of their method, which should be encouraged.\\n\\n\\n\\n**Weaknesses**\\n\\n1. This paper is very similar to [1] (initially submitted to S&P 2024 on December 7, 2023 and released on arXiv in May 8, 2024, link: https://arxiv.org/abs/2405.04825) in various aspects. It appears to be a straightforward implementation of [1] on Graph Neural Networks. However, it seems that the authors intentionally omit the reference to [1] and exaggerate their contributions by not citing [1]. The authors should provide a detailed discussion of the differences and originality of this paper compared with [1]. Otherwise, this paper might be regarded as plagiarism. The resemblances are outlined below.\\n 1. These two papers share **the same motivation** that existing backdoor-based watermarking methods are harmful and ambiguous.\\n 2. These two papers share **the same insight** to embed the watermark into the explanation of the neural network.\\n 3. These two papers share **nearly the same method** to embed the watermark. [1] proposed to utilize a LIME-based method, while this paper leverages GraphLIME (which is an extension of LIME in GNN).\\n 4. Both papers utilize **ridge regression** to calculate the explanations.\\n 5. Both papers use **hypothesis-based methods** for ownership verification. [1] utilized the chi2-test while this paper uses the z-test.\\n2. This paper uses Z-test for ownership verification and the follow-up analyses, based on matching indices. However, Z-test can be used only when the variable follows the Gaussian distribution (https://en.wikipedia.org/wiki/Z-test). 
However, to the best of our knowledge, the matching index (MI) follows a multinomial distribution instead of the Gaussian distribution. In this case, it is not appropriate to use the Z-test. Even if the authors claim that the distribution of MI approaches the Gaussian distribution under the central limit theorem, they need to verify this through a normality test.\\n3. Missing some important baselines [2, 3]. This paper also lacks an empirical comparison with these baselines.\\n4. The experiments on the robustness are inadequate. There is no discussion about the resistance to potential adaptive attacks. For example, the adversaries can design a strong adaptive attack by including the proposed method as a regularization and conducting overwrite attacks.\", \"references\": \"1. Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution. NDSS, 2025.\\n2. Revisiting Black-box Ownership Verification for Graph Neural Networks. S&P, 2024.\\n3. Watermarking Graph Neural Networks by Random Graphs. ISDFS, 2021.\\n\\n\\n**Questions**\\n1. Clarify the differences and novelty of this paper compared with [1].\\n2. Provide empirical comparisons with existing baseline methods.\\n\\n---\\n**Alert for Potential Plagiarism**\\n\\nWe have good reasons to believe that the core methodology of this paper was ported over from our NDSS'25 paper (Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution). The authors exaggerate their contribution in this submission by deliberately not citing our NDSS paper. In general, our paper provides a general watermarking method that can be used for both image classification and text generation. 
This paper is just an implementation of our paper in graph classification, although we did not do GNN experiments in our paper.\\n\\nI know this paper was accepted in September 2024, and that ICLR has a 3-month concurrent work policy. However, our work was uploaded to arXiv in early May this year (https://arxiv.org/abs/2405.04825), not to mention that it had already been reviewed by two conferences (SP, CCS). This paper is not only similar to ours in the general direction but also completely plagiarizes our technology. We believe that the original intention of the concurrent work policy is to protect authors from missing comparative work due to objective reasons, rather than to protect malicious plagiarism due to subjective intention.\\n\\nI urge the authors to explain this issue and provide a proper and comprehensive illustration of and comparison with our work.\"}
We aim to address each of your comments below.\\n\\n# **Comparison to other GNN watermarking methods.**\\nIn our paper, we highlight the limitations of backdoor watermarking methods, particularly their introduction of intentional misclassification and ambiguity. Additionally, the metrics for evaluating watermarking success differ fundamentally: prior methods, such as [1] and [2], measure watermarking accuracy (prediction success on backdoor samples), while our approach relies on hypothesis testing to demonstrate statistically significant patterns in explanations. This makes direct comparisons impossible.\\n\\nMoreover, we emphasize that our method does not claim superiority in terms of classification accuracy. Instead, the key advantage of our method lies in addressing critical security concerns, such as the susceptibility of backdoor-based methods to data poisoning and the inherent ambiguity in watermark verification.\\n\\n# **Model extraction attack.**\\n\\nTo address concerns about model extraction attacks, we implemented a knowledge distillation attack [3]. Knowledge distillation has two models: the original \\u201cteacher\\u201d model, and an untrained \\u201cstudent\\u201d model. During each epoch, the student model is trained on two objectives: (1) correctly classify the provided input, and (2) mimic the teacher model by mapping inputs to the teachers\\u2019 predictions. The student therefore learns to map inputs to the teacher\\u2019s \\u201csoft label\\u201d outputs (probability distributions) alongside the original hard labels; this guided learning process leverages the richer information in the teacher\\u2019s soft label outputs, which capture nuanced relationships between classes that hard labels cannot provide. By focusing on these relationships, the student model can generalize more efficiently and achieve comparable performance to the teacher with a smaller model and fewer parameters, thus reducing complexity. 
\\n\\nWe find that in the absence of a strategically-designed defense, the knowledge distillation attack successfully removes our watermark ($p>0.05$). This is unsurprising, since model distillation maps inputs to outputs but ignores mechanisms that lead to auxiliary tasks like watermarking.\\n\\nTo counter this, we outline a defense framework that would incorporate watermark robustness to knowledge distillation directly into the training process. Specifically, during training and watermark embedding, an additional loss term would penalize reductions in watermark performance. At periodic intervals (e.g., after every x epochs), the current model would be distilled into a new model, and the watermark performance on this distilled model would be evaluated. If the watermark performance (measured by the number of matching indices) on the distilled model is lower than the watermark performance on the main model, a penalty would be added to the loss term. This would ensure that the trained model retains robust watermarking capabilities even against knowledge distillation attacks.\\n\\n# **Performance without watermarking.**\\n\\nWe have updated the paper (Table 1) to include accuracy rates in the absence of watermarking. For your convenience, we list them here. Note that accuracy rates are reasonable both with and without watermarking.\\n\\n|**Accuracy (Train/Test)**|||||||\\n|-|-|-|-|-|-|-|\\n||***GCN***||***SGC***||***SAGE***|\\n|**Dataset**|No Watermark|Watermark|No Watermark|Watermark|No Watermark|Watermark|\\n|***Photo***|91.3/89.4|90.9/88.3|91.4/89.9|90.1/88.0|94.2/90.8|94.1/88.2|\\n|***PubMed***|88.6/85.8|85.7/81.4|88.8/85.9|85.3/81.4|90.5/86.0|91.1/81.2|\\n|***CS***|98.5/90.3|96.8/89.8|98.4/90.3|96.7/90.1|100.0/88.4|99.9/88.9|\\n\\n# **Other graph tasks.**\\n\\nWhile our results focus on node classification, our method can be extended to other graph learning tasks, such as edge classification and graph classification. 
The key is to obtain $n \\\\times F$ features matrices that we can derive explanations from. For node classification, each of our $T$ target subgraphs yields a $n \\\\times F$ matrix of node features. For edge classification, we can generate similar matrices for edge features. For graph classification, we can accomplish something similar by having each row in a $n \\\\times F$ matrix represent the features of an individual graph; rather than have one such matrix for each of $T$ target subgraphs, we instead have one matrix per *collection* of subgraphs.\\n\\n\\nPlease see further details of this extended logic in Appendix Section A.5 in our revisions.\\n\\n---\\n\\n## References\\n\\n[1] Zhao, Xiangyu, Hanzhou Wu, and Xinpeng Zhang. \\\"Watermarking graph neural networks by random graphs.\\\" 2021 9th International Symposium on Digital Forensics and Security (ISDFS). IEEE, 2021.\\n\\n[2] Xu, Jing, et al. \\\"Watermarking graph neural networks based on backdoor attacks.\\\" 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P). IEEE, 2023.\\n\\n[3] Gou, Jianping et al. \\u201cKnowledge Distillation: A Survey.\\u201d International Journal of Computer Vision 129 (2020): 1789 - 1819.\"}", "{\"title\": \"Response to Reviewer 7pbz\", \"comment\": \"Thank you for your thoughtful review and comments. We address your concerns below.\\n\\n# **GNN explanation intuition.**\\n\\nTo streamline our description, we have moved some of the detailed math in Section 3.2 to Appendix A.2, keeping only the final regression problem in the main paper. 
Here, we provide high-level insight into how the explanation process works:\\n\\n- Our explanations are the output of a closed-form ridge regression problem applied to a transformation of the node features and node predictions (using Gaussian kernel matrices inspired by GraphLIME, though we do not use GraphLIME itself).\\n- While ridge regression assigns non-zero weights to all features, the relative magnitudes of the regression coefficients indicate the most influential features. However, our goal is not to analyze the explanations themselves for feature importance. Instead, we use the explanations as an embedding space for our watermark.\\n- During our embedding process, we intentionally manipulate the feature attribution vectors to align with a predefined watermark, prioritizing the ability to match a specific pattern over interpretability. Ridge regression\\u2019s non-zero weights provide the flexibility needed for precise alignment with the desired watermark. By contrast, Lasso regression\\u2019s sparsity-inducing nature would limit the ability to manipulate attribution vectors for embedding, making it less effective for our purpose.\\n\\nTo clarify, our method does not rely on the explanations to interpret GNN predictions. Instead, we use them as a structured space where ownership information can be securely embedded and later verified. \\n\\n# **Adversary knowledge.**\\n\\nOur primary assumption is that the adversary does not have knowledge about the watermarked subgraphs. While we consider scenarios where they might know additional details, such as the number and size of the subgraphs, this is not our main focus. Rather, we include this analysis to demonstrate that even with such additional knowledge, our method remains robust. 
These scenarios are possible but not central to our default assumptions; they are presented to emphasize the resilience of our approach under varying conditions.\\n\\n# **Verifier knowledge.**\\n\\nOur point in stating that the verifier only has black-box access is to emphasize that no additional information or model access is necessary for verification. Even in a scenario where the verifier is restricted to this limited level of access, they can still successfully perform the verification task. This highlights the robustness of our approach, which does not rely on access to model parameters or other privileged information.\"}", "{\"title\": \"Update: Extension to other Graph Learning Tasks\", \"comment\": \"# Update: Extension to other Graph Learning Tasks\\n\\nThank you again for your previous comments on our work. Your question about applications to other graph learning tasks was mirrored by Reviewer ub6g, and to address this, we now have additional results to share.\\n\\nIn our previous comments, we outlined a framework for extending our methodology to other graph learning tasks (more details in Appendix Section A.5). To illustrate the broader applicability of our approach, we have now included results from watermarking explanations of predictions on the widely-used MUTAG [1] dataset within the graph classification framework. These results consistently yield p-values below 0.05 under varied configurations, demonstrating that our method can be effectively applied to other graph learning tasks.\\n\\n||# Collections |of Subgraphs||\\n|-|-|-|-|\\n||4|5|6||\\n|**p-value**|0.039|0.037|<0.001||\\n|**Acc (train/test)**|0.915/0.900|0.954/0.929|0.915/0.893|\\n\\nWhile the edge classification framework would align closely with our node classification approach, watermarking for graph classification requires more modifications to our framework; therefore, the above results provide strong evidence of the flexibility of our method.\\n\\n---\\n\\n[1] Debnath, Asim Kumar et al. 
\\u201cStructure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity.\\u201d Journal of medicinal chemistry 34 2 (1991): 786-97.\"}", "{\"title\": \"Revisions Summary\", \"comment\": [\"Thank you for your feedback. We'd like to summarize the revisions to our paper. (These changes are highlighted in the current revision in blue. Note: the majority of these changes are in the appendix.)\", \"**Clarifications**\", \"Added references to [1] in the Introduction, Related Works, and Section 4.1 to better contextualize our contributions.\", \"Moved detailed mathematical derivations from Section 3.2 to Appendix A.2 to streamline our main text.\", \"Clarified GraphLIME\\u2019s relation to our explanation method.\", \"**Experiments**\", \"Added a non-watermark accuracy column to Table 1.\", \"Tested the distribution of match indices for normality to ensure the validity of our statistical approach (Appendix Table 2).\", \"Corrected an error in our fine-tuning results in Figure 2 and included additional fine-tuning results (Appendix Figure 12).\", \"Obtained preliminary results extending our method to graph classification (Appendix Table 4).\", \"**Future Work**\", \"Outlined potential extensions of our method to other graph learning tasks and defenses against model distillation attacks, highlighting its versatility and adaptability (Appendix A.6)\", \"While these changes address reviewer feedback and improve the paper\\u2019s clarity and robustness, they do not alter the core methodology or main contributions of our work. Instead, they provide additional evidence to support our claims, improve presentation, and demonstrate the method\\u2019s broader scope and applicability. These refinements enhance the paper\\u2019s overall quality without shifting its scope or focus.\", \"---\", \"[1] Shao, Shuo, Yiming Li, Hongwei Yao, Yiling He, Zhan Qin, and Kui Ren. 
\\u201cExplanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution.\\u201d Proceedings of the Network and Distributed System Security Symposium (NDSS), 2025.\"]}", "{\"summary\": \"In this paper, the authors propose to watermarking GNNs via the explanations of GNNs. Their theory indicates that locating the watermark is an NP-hard problem and the experiments demonstrate that the proposed attack is robust to current defenses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 This motivation of the paper is clear.\\n\\n2 The experiments are quite solid.\\n\\n3 The soundness of the proposed method is good.\", \"weaknesses\": \"1 Although the authors claim that the proposed method can avoid data pollution and eliminates intentional misclassification. However, compared to the traditional methods, it requires a more tight threat model: the training process of target models. It is not always accessible to all scenarios.\\n\\n2 In Figure 2, the results indicate that fine-tuning fails to remove the effectiveness of the embedded watermark. However, as far as I know, the effectiveness of the fine-tuning attack is closely related to the detailed setting of the learning rate, A larger learning rate might attack the proposed method.\\n\\n3 In this paper, the authors only narrows the attack in the node classfication. It is only a sub-task in the graph community. I am not sure whether the attack is general enough to be applied to another task, such as edge classification.\", \"questions\": \"1 As far as I know, GNNs is only one type of models for node classification, how about more powerful models such as graph transformers?\\n\\n2 In Table 1, the authors only show the accuracy after inserting watermark. How about accuracy with performing watermarking? 
These data are needed to demonstrate the stealthiness of the attack.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a watermarking method for GNNs based on explanations on graphs. During training, the work adds an additional watermark embedding loss to ensure that the explanations' features of the watermarked subgraphs align with the watermark pattern. Then the model owners can use the similarity between the model's explanations and the watermark pattern to claim their ownership. Experiments and theoretical analysis demonstrate its effectiveness for ownership verification and robustness to pruning and fine-tuning.\", \"strength\": \"1. The proposed method based on GNN's explanation is interesting and effective under their evaluation.\", \"weaknesses\": \"1. The experiment settings are limited and may be weird considering its threat model.\\n\\n2. The work cannot work under a model extraction attack.\\n\\n3. The writing for its method needs to be clearer.\\n\\nAs all reviewers are worried about the method's scope and effectiveness, I think the authors should do more before this paper is accepted. I would like to propose some new comments for the authors of this paper:\\n\\n1. I think the watermarking scenarios for GNNs can only be valid when the GNN is complex or the training dataset is really large. Therefore, I suggest you conduct your experiments at least on the OGBN datasets. I think fine-tuning a small GNN trained on Pubmed, Photo, CS or MUTAG is not realistic, as they can be retrained easily in only a few minutes.\\n\\n2. More complex GNNs should be considered; more commonly used GNNs like GAT, GCNII, APPNP, or even ResGCN should be considered to enhance the paper's impact.\\n\\n3. 
Add more experiments on other tasks like Graph classification.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers are worried about the method's scope and effectiveness. As for the plagiarism claim raised by the public comments, both reviewers and I do not agree. However, most reviewers and I agree that the authors should do more before this paper is accepted.\"}", "{\"summary\": \"The paper proposes a graph neural network watermarking method that embeds ownership information into the explanations of predictions. During the training process, the node classification loss maintains the model's classification accuracy, while the watermark embedding loss ensures that the feature explanations of the watermarked subgraphs align with the watermark pattern. After training, the model owner can claim its ownership by the similarity between the explanations and the watermark pattern. The author proves that it is an NP-hard problem for attackers to find the watermarked subgraphs. The method mitigates the impact of intentional misclassification compared to traditional backdoor-based watermarking techniques. Experiments on three datasets in node classification task illustrate the effectiveness of ownership verification and its robustness to pruning and fine-tuning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper first introduces node feature explanations into GNN watermarking and demonstrates the potential of model explanations for applications in the GNN watermarking domain. By embedding the watermark within GNN prediction explanations rather than in the training data, the method mitigates risks associated with data pollution and misclassification.\", \"weaknesses\": \"1. Several prior methods have been proposed in the area of GNN watermarking, such as [1] and [2]. It is important to consider experiments that compare the watermark accuracy and downstream task performance of these methods.\\n2. 
The paper highlights the robustness of the method against pruning and fine-tuning. Given that adversaries may attempt model extraction attacks, as described in [3], it is advisable to address the robustness against such extraction attacks.\\n\\n[1] Zhao, Xiangyu, Hanzhou Wu, and Xinpeng Zhang. \\\"Watermarking graph neural networks by random graphs.\\\" 2021 9th International Symposium on Digital Forensics and Security (ISDFS). IEEE, 2021.\\n\\n[2] Xu, Jing, et al. \\\"Watermarking graph neural networks based on backdoor attacks.\\\" 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P). IEEE, 2023.\\n\\n[3] Shen, Yun, et al. \\\"Model stealing attacks against inductive graph neural networks.\\\" 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.\", \"questions\": \"1. How does the model perform without watermarking? It would be helpful to compare the classification accuracy on downstream tasks before and after embedding the watermark.\\n2. The explanation method GraphLIME is applied only to the node classification task in the paper. Can this watermarking method be extended to other downstream tasks in graph-based applications?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
As such, I respectfully request that the reviewers and area chair reconsider the contributions of this paper in light of the revised version.\", \"ps\": \"While I agree that the authors partially addressed my concerns about the methodology in terms of hypothesis testing, I still have some doubts about the comprehensiveness of the paper in terms of robustness. I suggest the authors conduct more adaptive attacks for discussion.\\n\\nBest Regards,\"}
However, this paper requires many revisions, which may not be allowed by ICLR. Therefore, I will not vote for acceptance.\"}" ] }
EgJhwYR2tB
Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling
[ "Wenda Xu", "Rujun Han", "Zifeng Wang", "Long Le", "Dhruv Madeka", "Lei Li", "William Yang Wang", "Rishabh Agarwal", "Chen-Yu Lee", "Tomas Pfister" ]
Recent advances in knowledge distillation (KD) have enabled smaller student models to approach the performance of larger teacher models. However, popular methods such as supervised KD and on-policy KD are adversely impacted by the knowledge gap between teacher and student in practical scenarios. Supervised KD suffers from a distribution mismatch between training with a static dataset and inference over final student-generated outputs. Conversely, on-policy KD, which uses student-generated samples for training, can suffer from low-quality training examples with which teacher models are not familiar, resulting in inaccurate teacher feedback. To address these limitations, we introduce Speculative Knowledge Distillation (SKD), a novel approach that leverages cooperation between student and teacher models to generate high-quality training data on-the-fly while aligning with the student's inference-time distribution. In SKD, the student proposes tokens, and the teacher replaces poorly ranked ones based on its own distribution, transferring high-quality knowledge adaptively. We evaluate SKD on various text generation tasks, including translation, summarization, math, and instruction following, and show that SKD consistently outperforms existing KD methods across different domains, data sizes, and model initialization strategies.
[ "LLM", "Knowledge Distillation", "On-policy" ]
Accept (Poster)
https://openreview.net/pdf?id=EgJhwYR2tB
https://openreview.net/forum?id=EgJhwYR2tB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ySHX7LhqB9", "wPqRbGKdny", "vMQLZA6URU", "rMXUs3xRCW", "meVjCp3sXU", "kagmaDmBsw", "fK3TxFHlNW", "f6Ulj4VSKh", "edH2HlbnuU", "dmsnnX3bv7", "bhnB4WPKir", "WsyT433YJI", "VSdcmCaZF8", "RLIT9sEAvU", "RCK1NIXV1e", "Nf1hddXkKJ", "NC6oFXk72G", "KNcrvhyWrM", "Ivlt1KYAzW", "IG9JjosQsv", "IG6QUQtjay", "G1fUCUSELz", "EKltu3n7XV", "CJecqPVEfH", "CGZ7U8JV7E", "AEENLi70Ko", "8AwO4yHgfD", "2eRX177Hb9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732644237416, 1732212863981, 1732215224250, 1732376383865, 1732211006564, 1732220003686, 1732213461995, 1732382417389, 1732215943971, 1733186217350, 1733257266059, 1737523493775, 1734536913437, 1732215877499, 1732567160019, 1732381000547, 1732219730928, 1732210933167, 1730548209372, 1732224735698, 1733167008530, 1732211114126, 1732376537688, 1730714631259, 1732381498850, 1732429659854, 1732376437327, 1730646540664 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_VY1t" ], [ 
"ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2249/Area_Chair_ehY4" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_EjjZ" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_EjjZ" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Area_Chair_ehY4" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_VY1t" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_cyWE" ], [ "ICLR.cc/2025/Conference/Submission2249/Authors" ], [ "ICLR.cc/2025/Conference/Submission2249/Reviewer_cyWE" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the reply\", \"comment\": \"Thanks a lot for your suggestion and score updates! We will update paper accordingly.\"}", "{\"title\": \"Response to Reviewer EjjZ regarding to weakness a) Part 1\", \"comment\": \"Corresponding to \\\"(a) I like the idea of evaluating the student\\u2019s suggestions by the teacher and selecting the Top-K based on the teacher\\u2019s probabilities. However, what concerns me is the computational cost of the method. Given that this is the main contribution of the work, I think it has to be more clearly studied. Specifically, what I really like to see is how often the line 5 in Algorithm 1 is triggered. In Lines 212-216 it is essentially claimed that this trigger rate is decreased over training time, I think it has to be evidenced with a figure (x axis being the training steps and y axis being the trigger rate). 
In a similar sense, I think it should be reported what is the final rate (i.e how likely it is for the student\\u2019s tokens to be in TopK of the teacher\\u2019s after the distillation).\\\"\\n\\nThanks for the comment! We provide an estimate below, and will include details in the final version.\\n\\nThe following Table is Table 1\\n\\n|From Gemma-2b-it | 0 | 50 | 100 | 150 | 200 | 250 | 300 | 750 | 1000 | 1225 |\\n|----------------|--------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| SKD-25 | 0.409 | 0.145 | 0.117 | 0.097 | 0.092 | 0.081 | 0.071 | 0.058 | 0.055 | 0.054 |\\n\\nThe following Table is Table 2\\n\\n|From SFT Gemma-2b | 0 | 50 | 100 | 150 | 200 | 250 | 300 | 750 | 1000 | 1225 |\\n|----------------|--------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| SKD-25 | 0.047 | 0.035 | 0.025 | 0.017 | 0.012 | 0.011 | 0.008 | 0.002 | 0.001 | 0.001 |\\n\\nFollowing the suggestion, we compute the rejection rate (defined as the total number of tokens re-sampled by the teacher divided by the total number of tokens generated by both teacher and student) of the teacher model under different student initializations across training steps. We randomly select 100 samples from the training data. We estimated rejection rates of the teacher model (Supervised FT Gemma-7B) with student models gemma-2b-it and gemma-2b-sft. For gemma-2b-it as the student, the rejection rate was initially high (around 40.9% of tokens resampled by the teacher during one sequence generation). It dropped dramatically over the first 150 steps, from 40.9% to 9.7%, and continued to decrease gradually as training went on. 
This validates our hypothesis that at early training iterations, SKD behaves more like supervised KD (around 40.9% of tokens are replaced by the teacher), while towards the end of training, SKD behaves more like on-policy KD, with only ~5% of tokens replaced.\\n\\nWe further estimate the rejection rate with the SFTed gemma-2b student model. We find that with a relatively better student model initialization (i.e., higher proximity to the teacher), the rejection rate can be quite low at the beginning (around 4.7%). The trend stays the same: the rejection rate continues to decrease as the training step increases. Despite the relatively low rejection rate for SKD under the SFTed student model, we still demonstrate SKD\\u2019s superior performance against on-policy KD in Table 2.\"}", "{\"title\": \"Response to Reviewer EjjZ regarding to weakness b)\", \"comment\": \"Corresponds to \\\"(b) Moreover, I think the value for K should be studied. Essentially for high Ks the method should get somewhat similar to On Policy KD and for lower ones should be closer to Supervised KD? So there should be a sweet spot. Also I\\u2019m wondering if the value for K needs to be adjusted based on the training step? In other words, maybe higher or lower Ks would be better at the early or later stages of distillation? Some ablation experiments (with different numbers for K) could be helpful in addressing this.\\\"\\n\\nThanks to the reviewer for bringing up the ablation of the value of K. We studied the effect of different Ks across different tasks in Appendix B. Our findings are threefold: 1) The overall performance trend confirms our hypothesis that as K increases, SKD degenerates to on-policy-KD-like behavior, while decreasing K degenerates SKD to supervised-KD-like behavior. Consequently, neither the highest nor the lowest values of K are optimal. 2) We can find a sweet spot for K across tasks (Appendix B), although for generalization purposes, we didn\\u2019t over-optimize K for each task.
3) We also showed that a wide range of Ks can outperform both supervised KD and on-policy KD. \\n\\nWe also explored experiments adjusting K during training (decreasing K). However, we didn\\u2019t obtain positive signals (it didn\\u2019t beat the constant-K approach). Our hypothesis is that SKD assumes that as the model improves, it naturally acts like on-policy KD. However, heuristically decreasing K can force it to replace the token and behave like supervised KD, which introduces off-policy tokens, hurting the performance of the model. Still, we encourage future work to explore this adaptive-K direction further. \\n\\n\\nFinally, we conducted experiments using an adaptive K strategy at decoding steps. Due to the autoregressive nature of our base models, their token generation becomes more deterministic with longer sequences. This raises the question of whether reducing K as the sequence grows could further improve performance. We experimented with a linear decay of K, decreasing it by $1$ for each additional generated token. We set a minimum K to ensure our algorithm does not degenerate to supervised KD towards the end of decoding.\\n\\n\\n| | Translation | GSM8K | \\n|----------------|--------|----------------|\\n| SKD(K=25) | 75.3 | 29.1 | \\n| SKD(K from 50 to 25) | 75.1 | 25.2 | \\n| SKD(K from 25 to 15) | 75.0 | 28.7 | \\n\\nWe didn\\u2019t receive strong positive signals through this approach. We believe future work could explore this direction further, since this is still a naive adaptive-K scheme.\"}", "{\"title\": \"Additional questions?\", \"comment\": \"Hi Reviewer VY1t,\\n\\nThank you again for your time and efforts in our work; your feedback has been quite constructive in revising our paper! We have meticulously considered and responded to each of your concerns. If you have any additional questions, we are very happy to address them during this open discussion period. 
\\n\\nWe hope that most of the reviewer\\u2019s concerns have been addressed and, if so, they would reconsider their assessment. We\\u2019d be happy to engage in further discussions.\\n\\nBest regards,\\nThe authors\"}", "{\"title\": \"Response to Reviewer VY1t regarding to weakness 2)\", \"comment\": \"Corresponding to \\\"It is a bit hard to understand the task-agnostic distillation setting. I see the training and test sets are both based on the math reasoning datasets. Where does the shifts come from?\\\"\\n\\nIn task-agnostic distillation, the model is trained on a diverse set of tasks within a domain (in this case, mathematics) to develop general capabilities. However, it is then evaluated on specific tasks within that domain that were not seen during training. In our study, the MathInstruct model was trained on a variety of math datasets (MathQA [2], math reasoning [3], tabular processing [4]) to learn general mathematical skills. We then tested its ability to generalize to new, unseen math problems in the ASDIV [5], SVAMP [6], and GSM_plus [7] datasets. 
This approach, where the model is trained on a broad set of tasks and tested on specific ones, is consistent with the task-agnostic distillation framework described in [1].\\n\\n[1] Advancing LLM Reasoning Generalists with Preference Trees\\n\\n[2] MathQA: Towards interpretable math word problem solving with operation-based formalisms\\n\\n[3] NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks.\\n\\n[4] Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning\\n\\n[5] A diverse corpus for evaluating and developing English math word problem solvers\\n\\n[6] Are NLP models really able to solve simple math word problems?\\n\\n[7] Gsm-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem solver\"}", "{\"title\": \"Response to Reviewer cyWE regarding to weakness 2)\", \"comment\": \"Responding to \\\"Task-Specific Adaptability: SKD maintains overall sampling quality through dynamic adjustments of token-level sample quality and adaptive switching between teacher and student sampling. However, the authors do not explore how to customize these mechanisms for different task types or complexities.\\\"\\n\\nThank you to the reviewer for highlighting the importance of the ablation study on the value of K. We have detailed the effects of varying K across different tasks in Appendix B of our paper. Our findings are threefold: 1) The overall performance trend confirms our hypothesis that as K increases, SKD degenerates to on-policy KD like behavior, while decreasing K degenerates SKD to supervised KD like behavior. Consequently, neither the highest nor the lowest values of K are optimal. 2) We can find a sweet spot for K across tasks (Appendix B, K=50 for translation, K=5 for summarization and K=25 for GSM at Figure 6). For generalization purposes, we didn\\u2019t over-optimize k for each task. 3) We also showed a wide range of Ks can outperform both supervised KD and on-policy KD. 
\\n\\nWe suspect that a higher K value may be optimal for more constrained, close-ended tasks such as translation. This setting allows the student model to explore a range of sample qualities, both high and low, to ensure better coverage and diversity within a limited sample space. For open-ended tasks such as conversation summarization, we recommend a lower K value in SKD. Higher K values may cause the student to explore an excessive range of answer options, potentially diverting the student model\\u2019s learning trajectory. By reducing K, we facilitate more frequent interleaving of teacher sampling, which helps ensure that the samples maintain meaningful learning signals. We leave a more rigorous study to future research.\"}", "{\"title\": \"Response to Reviewer EjjZ regarding to weakness a) Part 2\", \"comment\": \"With the estimated rejection rates as a reference, we can further analyze the computational costs of this approach. Leveraging the isoFLOP cost calculation provided in [3], we can estimate the inference cost of our models using this formula: 2*N*D+2*N*sqrt(D)*L, where N: number of parameters, D: number of input tokens, L: number of decoding steps.\\n\\nFrom our rejection rate, we can compute the expected rejection rate of gemma-2b-it during the training process. We use 10% as our expected rejection rate (maybe slightly higher than the actual rejection rate during training but easier for analysis). Therefore, the expected acceptance rate is 90%. We can estimate the expected number of generated tokens under one run of line 5. Algorithm 1 shows a simplified algorithm where gamma=1. In practice, we use gamma=5. \\n\\nWe can use a geometric series to estimate the expected number of generated tokens, similar to [4]. E(# generated tokens with one run)=(1-0.9^5)/(1-0.9)=4.1. For simplicity of analysis, we round it to 4. For 5 student-proposed tokens, the teacher evaluates all tokens in parallel and likely rejects the last one. 
Therefore, we can approximately claim that in SKD, the student model will generate 80% of tokens while the teacher will generate 20% of them. Putting this into the inference-time formula, \\n\\nAssuming N_student=2B, N_teacher=7B, D=256, L=200, and that ground truth is not available and needs to be sampled from the teacher (the practical case): \\n\\nCost (on-policy) = 2*2B*256+2*2B*16*200=13,824B\\n\\nCost (SKD) = 2*2B*256+2*2B*16*200 * 80%+ 2 * 7B * 256 + 2*7B*16*200 * 20% = 23,808B\\n\\nCost (Supervised KD) = 2 * 7B * 256 + 2*7B*16*200 = 48,384B\\n\\nThe sampling cost of SKD is roughly 1.72x that of on-policy KD, while supervised KD is roughly 3.5x that of on-policy KD. **SKD can save roughly 50% of compute compared to directly sampling from the teacher.**\\n\\nNote that we approximately calculate this compute using an instruction-tuned student model. If we use the SFTed student model, more compute can be saved. This is mostly to give the reviewer a rough overview of how much compute is needed for SKD sampling.\\n \\n[3] Scaling Laws for Neural Language Models\\n\\n[4] Fast Inference from Transformers via Speculative Decoding\"}", "{\"title\": \"Possibility of adjusting the confidence score\", \"comment\": \"We kindly request that the reviewer consider raising the overall rating or confidence score, provided there are no additional concerns.\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Response to Questions\", \"comment\": \"(a) It is not 100% clear to me why one would replace a bad token with a token sampled from the teacher's distribution. Could the authors give some intuition of why not re-attempt the sampling from the student's distribution (line 4 in Algorithm 1) to replace the last token but under the condition that it's within TopK of the teacher? In other words, maybe the bad token sampled from the student's distribution can be replaced with a better token that is sampled from the same distribution, rather than one that comes from the teacher's. 
If it is within the computational budget, an experiment to demonstrate this could also help prove the point.\\n\\nThis is an implementation choice. For example, the top-K tokens in the teacher vocabulary can have a cumulative probability of 0.1 under the student\\u2019s vocab distribution. Yes, we can continuously resample from the student model until we yield a token within the teacher's top-K. However, it would still be an off-policy token, because it is not the top token likely to be generated by the student. \\n\\n(b) If I understood correctly, your method does not add any computational overhead to the distillation process, as it is a simple probability check and a token replacement. If so, please assert this both in the rebuttal and also in the paper. Otherwise, a clear discussion on computational overhead is missing.\\n\\nThanks! Please refer to the response to weakness a). We will include these details in our final version.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Dear Authors,\\n\\nThank you for the further clarification, especially the evaluation of quantifying the OOD issue suffered by on-policy KD. Considering the technical contribution and soundness, I think score 6 is appropriate. I have raised my confidence score to reflect my evaluation.\"}", "{\"title\": \"Discussion phase summary\", \"comment\": \"We again thank all reviewers for their helpful reviews and comments. We are glad to see all reviewers give us positive feedback after the discussion period, showing the reviewers' genuine interest in our work.\\n\\nDuring the rebuttal period, we addressed all questions from all three reviewers. 
Each reviewer acknowledged our responses and expressed no further concerns.\\n\\nTo conclude the discussion phase, we briefly summarize the main concerns raised in the reviews and the corresponding improvements to the paper.\\n\\n**Quantify OOD issues for poor student samples**\\n\\n To address concerns from reviewer VY1t, we utilized teacher perplexity (PPL) to quantify out-of-distribution (OOD) issues in poor on-policy samples. We clearly demonstrate OOD issues using PPL.\\n\\n**Show SKD's convergence compared to on-policy and supervised KD**\\n\\nTo address concerns from reviewer cyWE, we demonstrated the effectiveness of SKD by showing that it achieves superior validation convergence compared to both on-policy KD and supervised KD under instruction-tuned and SFTed initializations across three tasks.\\n\\n**Show student token rejection rate during SKD training process and estimate SKD's computation**\\n\\nTo address concerns from EjjZ, we charted the teacher model\\u2019s rejection rates against tokens proposed by the student model during the training process and calculated the theoretical computational demands of SKD compared to sampling from student and teacher models. SKD can save roughly 50% of compute compared to directly sampling from the teacher.\\n\\n**Ablation study of K**\\n\\nWe also point reviewers cyWE and EjjZ to the ablation of K results in Appendix B. In that section, we showed that a wide range of Ks can outperform both supervised KD and on-policy KD, and that an optimal K can be found for each task. However, for generalization purposes, we didn\\u2019t over-optimize K for each task. \\n\\n\\n**Additional revisions**\\n\\nAside from the main discussion listed above, we also addressed some smaller comments and suggestions from the reviewers.\\n\\nWe will incorporate these updates into the camera-ready version of the paper.\\n\\nOverall, we believe the reviewers' comments helped us greatly improve the paper's quality. 
We thank the reviewers for their efforts during the rebuttal and discussion period.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"The paper presents a Knowledge Distillation method that circumvents issues in both supervised KD and on-policy KD [Reviewer VY1t]\\n\\nThe authors evaluate SKD across several types of tasks including translation, summarization, math, and instruction following, consistently outperforming baseline methods across different domains, data sizes, and initialization strategies. SKD shows robust performance in both task-specific and task-agnostic distillation, with strong generalization on complex math tasks and advantages in low-data scenarios. [Reviewer cyWE]\\n\\nThe idea is clearly motivated in the paper and makes sense. The quantitative results also seem promising (across tasks the method is mostly outperforming prior work by a good margin). Especially the method is consistently outperforming On Policy KD, which supports the proposed idea of switching to teacher tokens if the student tokens are bad. [Reviewer EjjZ]\\n\\nThe authors have addressed concerns from reviewers and have updated the manuscript with sufficient empirical results to substantiate claims and to enhance the robustness, scalability and convergence of the proposed SKD.\", \"additional_comments_on_reviewer_discussion\": \"There have been intense discussions during the rebuttal phase and the authors' rebuttal has answered most of the concerns raised by the reviewers. 
The authors have also improved the manuscript and the supplementary materials to improve the quality of the paper - including a discussion of computational overhead (the computation of this overhead is not straightforward, as agreed by Reviewer EjjZ), quantified OOD issues for poor student samples (reviewer VY1t), SKD's convergence compared to on-policy and supervised KD (reviewer cyWE), and ablation studies (Reviewers cyWE and EjjZ for the ablation of K results)\\n\\nAll three reviewers have increased their score and/or confidence after the rebuttal phase. This paper can clearly be accepted as a poster.\"}", "{\"title\": \"Response to Reviewer EjjZ regarding to weakness from c-g\", \"comment\": \"Thank you so much for your suggestion! We will revise each part accordingly!\\n\\nRegarding \\\"c) In Line 119, it is stated that \\u201cthe temperature t introduces randomness\\u201d. I think this is not entirely correct as it\\u2019s the sampling that introduces randomness. The temperature adjusts the distribution so it adjusts the randomness (as it is also stated correctly in the following sentence).\\\"\\n\\nWe revised the phrase in Line 119 to mention that \\\"The temperature adjusts the distribution. Therefore, it adjusts the randomness when we sample from the model\\\".\\n\\nRegarding \\\"(d) In Line 184, \\u201cCorrect previous mistakes\\u201d, it is not clear what mistake is referring to. Is a mistake a poorly generated Token? What does correcting refer to?\\\"\\n\\nWe revised our writing at Line 184: \\\"previous mistakes\\\" refers to \\\"poor tokens\\\" generated by the student model.\\n\\nRegarding \\\"(f) In the pseudo-code (Algorithm 1), some variables, such as alpha in line 3, are not defined.\\\"\\n\\nalpha is defined at line 164. We will make it clearer in the paper.\\n\\nRegarding \\\"(g) This is not really an issue, rather a suggestion. A tiny figure to explain concepts defined in Section 3.1 would be very helpful. 
Basically a 1-row figure divided into maybe 4-5 sub-columns, with every column showing how each of the prior works applies the loss at token-level. Maybe something like Figure 1 in https://arxiv.org/pdf/2104.09044 I think Figure (2) can be refactored a bit so that it also includes other prior work. \\\"\\n\\nThank you so much for this suggestion! We will revise accordingly!\"}", "{\"title\": \"Additional concern if any\", \"comment\": \"Hi Reviewer VY1t,\\n\\nThank you again for your time and efforts in our work; your feedback has been quite constructive in revising our paper! We have addressed all of reviewers cyWE and EjjZ's concerns (cyWE raised their score for us). As the discussion period moves towards its end, is there any additional concern that you would like us to address? We are happy to engage in further discussion.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Concerns are Addressed\", \"comment\": \"I thank the authors for their detailed reply and my concerns are now addressed.\\n\\nRegarding weakness (a), I think the provided tables on the rejection rate are helpful and clearly support the claims of the paper. Therefore, please add these at least to the supplement and clearly refer to them in the main paper.\\n\\nRegarding weakness (b), it seems that this was an oversight from my side, as it was already discussed in the paper. I thank the authors for addressing this as well.\\n \\nRegarding the computational overhead w.r.t on-policy (and similarly gains over SupervisedKD), I think it would be easier if these also appear in the supplement. I think however, given that it's not very straightforward to calculate the overhead (as it is based on the rejection rate, the K, etc.) 
I think empirical result (overall distillation time) would be easier for future readers to grasp.\"}", "{\"title\": \"Response to Reviewer cyWE regarding to weakness 1)\", \"comment\": \"Responding to \\\"Impact of Early Low-Quality Samples Lacks In-Depth Analysis: The SKD framework uses interleaved sampling and speculative decoding to handle low-quality samples generated by the student model during its initial training stages. However, the authors do not analyze how these low-quality samples may impact complex tasks, such as mathematical reasoning. Early low-quality samples could affect the stability of teacher feedback, slow convergence, and compromise overall stability. The authors could consider adding an analysis of model convergence behavior across tasks of varying complexity in their experiments to gain a more comprehensive understanding of SKD\\u2019s convergence characteristics in complex scenarios.\\\"\\n\\nThank you so much for your constructive feedback. We included analysis below.\\n\\nTable below shows validation loss respect to training steps at summarization (**For IT initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | \\n|----------------|--------|----------------|----------------|\\n| 0 | 4.44 | 4.44 | 4.44 |\\n| 50 | 0.54 | 0.60 | 0.55 |\\n| 100 | 0.46 | 0.50 | 0.45 |\\n| 200 | 0.42 | 0.44 | 0.42 |\\n| 300 | 0.40 | 0.42 | 0.42 |\\n| 350 | 0.40 | 0.42 | 0.42 |\\n\\nTable below shows validation loss respect to training steps at GSM (**For IT initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | \\n|----------------|--------|----------------|----------------|\\n| 0 | 2.32 | 2.32 | 2.32 |\\n| 50 | 0.27 | 0.44 | 0.39 |\\n| 100 | 0.23 | 0.28 | 0.32 |\\n| 200 | 0.21 | 0.25 | 0.28 |\\n| 300 | 0.20 | 0.23 | 0.27 |\\n| 350 | 0.19 | 0.22 | 0.26 |\\n\\nTable below shows validation loss respect to training steps at Translation (**For IT initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | 
\\n|----------------|--------|----------------|----------------|\\n| 0 | 5.96 | 5.96 | 5.96 |\\n| 50 | 1.35 | 2.32 | 1.16 |\\n| 100 | 1.06 | 2.09 | 0.99 |\\n| 200 | 0.91 | 2.05 | 0.95 |\\n| 300 | 0.87 | 2.00 | 1.04 |\\n| 350 | 0.83 | 1.91 | 1.03 |\\n\\nWe report the validation loss of various baseline KD strategies across 350 training steps, distilling Gemma-7B into the Gemma-2B-it model. Our findings demonstrate that SKD consistently achieves the lowest validation loss by the final iteration across all three tasks. We note an important observation regarding on-policy samples: their lower quality can cause slow initial convergence (as seen at step 50 across all tasks), where on-policy KD exhibits a higher validation loss compared to both supervised KD and SKD. In translation tasks, the student models trained with on-policy KD continue to show higher validation losses than those trained with SKD or supervised KD due to the inferior quality of on-policy samples, as reflected by the lower COMET scores detailed in Table 1. For simpler tasks, like translation and summarization, supervised KD can achieve lower initial validation losses compared to both SKD and on-policy KD. However, SKD rapidly improves in later iterations, eventually reaching a lower validation loss than both baselines by the end of the training process, whereas supervised KD runs the risk of overfitting (shown in the translation table). Conversely, for more complex tasks, such as the GSM task, SKD\\u2019s interleaved sampling method enables rapid convergence and consistently lower validation losses throughout the training process, outperforming both on-policy and supervised KD.\"}", "{\"title\": \"Response to Reviewer VY1t regarding to weakness 1)\", \"comment\": \"Corresponding to \\\"Although it is relatively straightforward to agree with the authors that the on-policy KD suffers from the OOD issue, is there a way to show how severe the issue is? 
For example, would it be possible to quantitatively evaluate?\\\"\\n\\nThe following table is Table 1\\n\\n| From SFTed Gemma-7B | On-policy (student only) | SKD (teacher and student interleaved) | Supervised KD (ground truth) | \\n|----------------|--------|----------------|----------------|\\n| Teacher PPL | 46.8 | 10.8 | 1.57 |\\n\\n\\nThanks for the insightful feedback. We followed the suggestion and calculated the per-sample perplexity (PPL) using both the teacher (supervised FT Gemma-7B) and the student (instruction-tuned Gemma-2B). Our analysis included ground truth data, on-policy samples proposed by the student, and samples generated by SKD. We randomly selected 100 data points from the translation training data for student and SKD sampling. For supervised KD, we used the ground truth output.\\n\\nAs shown in Table 1, the teacher model assigned a significantly higher PPL (46.8 vs 1.57) to student-proposed samples compared to ground truth, indicating a substantial divergence from the teacher's distribution. Notably, SKD-proposed samples achieved a much lower PPL (10.8), suggesting that the teacher model can more accurately assess the quality of SKD-generated samples.\\n\\nThe following table is Table 2\\n\\n| From Gemma-2B-it | On-policy (student only) | SKD (teacher and student interleaved) | Supervised KD (ground truth) | \\n|----------------|--------|----------------|----------------|\\n| Student PPL | 44.0 | 358 | 5606.3 |\\n\\nTable 2 demonstrates that ground truth samples exhibit high PPL (5606.3) under the student model, confirming their off-policy nature. Crucially, our SKD approach, which combines on-policy token proposals with teacher corrections, substantially reduces PPL (358) compared to supervised KD. This indicates that SKD samples are more aligned with the student model's distribution, facilitating effective learning.\"}", "{\"summary\": \"The method seems very simple and, to some extent, sounds like an interpolation between two distillation strategies. 
Specifically, in Supervised KD the next-token distributions of student and teacher are trained to be similar on a fixed text, while in On-policy KD it is done on the student\\u2019s generated text. The method thus proposes to evaluate the tokens of the student and see whether each is within the TopK of the teacher\\u2019s; if not, the generated token of the student is discarded and the rest is followed with teacher\\u2019s tokens. But if the generated token satisfies the TopK check, we continue with student\\u2019s tokens.\\n\\n\\nThe idea is clearly motivated in the paper and makes sense. The quantitative results also seem promising (across tasks the method is mostly outperforming prior work by a good margin). Especially the method is consistently outperforming On Policy KD, which supports the proposed idea of switching to teacher tokens if the student tokens are bad.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"**(a)** I think the paper is extremely well written. Especially for someone who might not be fully familiar with how Knowledge Distillation is applied for LLMs, I really appreciate how clearly the authors explain prior work and how they relate to each other. With only the first pass, I was able to grasp the ideas across related work and the main contribution of the work.\\n\\n\\n**(b)** I think the simplicity of the method is one of its advantages. Simply checking the student token's quality, based on whether it's within the TopK of the teacher's, seems simple and intuitive. (see also Question a).\\n\\n\\n**(c)** The quantitative results, especially compared to On Policy KD, seem promising (please see the disclaimer in the Weaknesses section).\", \"weaknesses\": \"**(a)** I like the idea of evaluating the student\\u2019s suggestions by the teacher and selecting the Top-K based on the teacher\\u2019s probabilities. However, what concerns me is the computational cost of the method. 
Given that this is the main contribution of the work, I think it has to be more clearly studied. Specifically, what I really like to see is how often the line 5 in Algorithm 1 is triggered. In Lines 212-216 it is essentially claimed that this trigger rate is decreased over training time, I think it has to be evidenced with a figure (x axis being the training steps and y axis being the trigger rate). In a similar sense, I think it should be reported what is the final rate (i.e how likely it is for the student\\u2019s tokens to be in TopK of the teacher\\u2019s after the distillation).\\n\\n\\n**(b)** Moreover, I think the value for K should be studied. Essentially for high Ks the method should get somewhat similar to On Policy KD and for lower ones should be closer to Supervised KD? So there should be a sweet spot. Also I\\u2019m wondering if the value for K needs to be adjusted based on the training step? In other words, maybe higher or lower Ks would be better at the early or later stages of distillation? Some ablation experiments (with different numbers for K) could be helpful in addressing this.\\n\\n------ Minor issues ------\\n\\n**(c)** In Line 119, it is stated that \\u201cthe temperature t `introduces` randomness\\u201d. I think this is not entirely correct as it\\u2019s the sampling that introduces randomness. The temperature adjusts the distribution so it `adjusts` the randomness (as it is also stated correctly in the following sentence).\\n \\n**(d)** In Line 184, \\u201cCorrect previous mistakes\\u201d, it is not clear what `mistake` is referring to. Is a mistake a poorly generated Token? What does `correcting` refer to?\\n\\n**(e)** Line 250: \\u201cSince our assumption `is` that\\u201d\\n\\n**(f)** In the pseudo-code (Algorithm 1), some variables, such as alpha in line 3, are not defined.\\n\\n**(g)** This is not really an issue, rather a suggestion. A tiny figure to explain concepts defined in Section 3.1 would be very helpful. 
Basically a 1-row figure divided into maybe 4-5 sub-columns, with every column showing how each of the prior works applies the loss at token-level. Maybe something like Figure 1 in https://arxiv.org/pdf/2104.09044 I think Figure (2) can be refactored a bit so that it also includes other prior work. \\n\\n\\n\\n\\n------ Disclaimer ------\\n\\nI\\u2019m not very familiar with KD approaches for LLMs, and therefore it is possible that I haven't noticed a missing baseline. I would really appreciate it if other reviewers could also verify whether the set of baselines is complete.\", \"questions\": \"**(a)** It is not 100% clear to me why one would replace a `bad token` with a token sampled from the teacher's distribution. Could the authors give some intuition of why not re-attempt the sampling from the student's distribution (line 4 in Algorithm 1) to replace the last token, but under the condition that it's within the TopK of the teacher? In other words, maybe the `bad` token sampled from the student's distribution can be replaced with a `better` token that is sampled from the same distribution, rather than one that comes from the teacher's. If it is within the computational budget, an experiment to demonstrate this could also help prove the point.\\n\\n\\n**(b)** If I understood correctly, your method does not add any computational overhead to the distillation process, as it is a simple probability check and a token replacement. If so, please assert this both in the rebuttal and also in the paper. 
Otherwise, a clear discussion on computational overhead is missing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer cyWE regarding to weakness 3)\", \"comment\": \"Corresponding to \\\"Effect of Model Initialization Lacks Discussion: While the authors examine SKD\\u2019s performance under different initialization conditions, such as instruction tuning and supervised fine-tuning, they do not discuss how initial model quality affects SKD\\u2019s convergence efficiency and stability. The authors might consider investigating convergence behavior under various initialization qualities, particularly in low-quality scenarios, by dynamically adjusting the proportion of teacher samples to stabilize the training process.\\\"\\n\\nTable below shows validation loss respect to training steps at Summarization (**For SFTed initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | \\n|----------------|--------|----------------|----------------|\\n| 0 | 1.56 | 1.56 | 1.56 |\\n| 50 | 0.52 | 0.54 | 0.51 |\\n| 100 | 0.44 | 0.47 | 0.44 |\\n| 200 | 0.41 | 0.43 | 0.42 |\\n| 300 | 0.40 | 0.42 | 0.42 |\\n| 350 | 0.40 | 0.42 | 0.42 |\\n\\nTable below shows validation loss respect to training steps at GSM (**For SFTed initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | \\n|----------------|--------|----------------|----------------|\\n| 0 | 0.244 | 0.244 | 0.244 |\\n| 50 | 0.214 | 0.215 | 0.208 |\\n| 100 | 0.202 | 0.204 | 0.200 |\\n| 200 | 0.188 | 0.195 | 0.191 |\\n| 300 | 0.182 | 0.183 | 0.189 |\\n| 350 | 0.178 | 0.180 | 0.188 |\\n\\nTable below shows validation loss respect to training steps at Translation (**For SFTed initialization**)\\n\\n| Steps | SKD | On-policy | Supervised KD | \\n|----------------|--------|----------------|----------------|\\n| 0 | 1.14 | 1.14 | 1.14 |\\n| 50 | 0.886 | 0.891 | 0.997 |\\n| 100 | 0.859 | 0.862 | 0.948 
|\\n| 200 | 0.843 | 0.832 | 0.980 |\\n| 300 | 0.812 | 0.831 | 0.929 |\\n| 350 | 0.807 | 0.813 | 0.908 |\\n\\nWe report the validation loss of various baseline KD strategies across 350 training steps, distilling Gemma-7B to SFTed Gemma-2B model. Our findings demonstrate that SKD consistently achieves the lowest validation loss by the final iteration across all three tasks. Our finding shows that as base quality improves, on-policy KD consistently achieves lower validation loss compared to supervised KD at the end of training steps. This suggests that on-policy approaches can achieve better convergence when sample quality is improved. Most importantly, under both model initialization, SKD can achieve the best convergence with 350 training steps compared to both supervised KD and on-policy KD.\\n\\nOur training process remains stable across on-policy KD, supervised KD, and general KD because KL loss provides a relatively stable training objective. The key distinction between these approaches lies in the effectiveness of the samples for learning, as measured by their ability to minimize validation loss. Through comparative analysis of validation losses from both instruction-tuned (IT) and supervised fine-tuned (SFT) student model initializations, we have demonstrated that SKD achieves superior validation convergence compared to the baseline methods, by achieving the lowest validation loss.\"}", "{\"title\": \"Did you review the responses? Any concerns please?\", \"comment\": \"Dear Reviewer\\nAs we are getting closer to the review period, please engage in the discussion with the authors and let us know if you have further questions or are the responses satisfactory?\\n\\nThanks!\"}", "{\"title\": \"Response to Reviewer VY1t regarding to weakness 3) and 4)\", \"comment\": \"Corresponding to \\\"In table 2, it is obvious that the improvements are rather marginal. In Table 1, it seems the proposed method falls short behind another baseline. Would you explain? 
Significant tests for Table 1 & 2 could further strengthen the results.\\\"\\n\\nThanks for the question. We conducted a permutation significance test for SKD against all baselines for translation and summarization. SKD outperforms all baselines (p<0.05) except for SKD vs ImitKD at Gemma-7B to Gemma-2B-IT on summarization data and SKD vs supervised KD/SFT at Qwen-7B to Qwen-0.5B-it on translation data. \\n\\nWe would like to clarify the scale of the metrics used in our evaluation (see Appendix F). COMET-22 is a learned metric from human rating data. Despite its high correlation to humans, its score value is not directly interpretable as in traditional accuracy metrics. From Table 2, SKD improves over the best baseline at SFTed Gemma-2B and SFTed Qwen2-0.5B, with score differences of 0.4 and 0.7, respectively. Referring to the recent paper [2], under a 0.4 score difference, 75% of metric decisions will be aligned with human judgments, and under a 0.7 score difference, 85% of metric decisions will be aligned with human judgments. This indicates that SKD's improvements are significant.\\n\\nRegarding your comment about Table 1, where you mentioned that \\u201cit seems the proposed method falls short behind another baseline,\\u201d we believe the reviewer is referring to our results from Qwen-7B to Qwen-0.5B-it translation. Table 1 shows that Qwen-0.5B-it\\u2019s performance on Assamese-to-English translation is extremely poor. Therefore, with this poor model initialization, on-policy KD or ImitKD can have deteriorated performance due to low-quality student samples. On the other hand, samples coming from supervised KD/SFT have an advantage due to their much higher quality. We show that SKD can dynamically adjust sampled tokens to achieve performance close to that of supervised KD/SFT. In practice, we will use the SFTed Qwen-0.5B model as the student. From Table 2, we showed that SKD outperforms all baselines and beats the best baseline ImitKD by 0.7 COMET points. 
\\n\\n[2] Navigating the Metrics Maze: Reconciling Score Magnitudes and Accuracies\"}", "{\"summary\": \"This paper proposes a new knowledge distillation method called Speculative KD (SKD), inspired by speculative decoding, that addresses the drawbacks of supervised KD and on-policy KD while keeping the advantages of both worlds. Specifically, to circumvent the issue of supervised KD, SKD has the teacher model evaluate the output of the student model. To alleviate the \\\"cold-start\\\" OOD problem of on-policy KD, SKD filters out intermediate tokens that the teacher is unlikely to generate, and re-samples from the teacher.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The motivation of the paper is convincing. Supervised KD is known to suffer from distribution shift, while on-policy KD suffers from the OOD problem.\", \"The paper is well organized and carefully written. The problem setting is well formulated.\", \"All-around empirical results pretty much validate the effectiveness of the method. The authors conducted experiments on four tasks and one task-agnostic distillation setting.\"], \"weaknesses\": [\"Although it is relatively straightforward to agree with the authors that on-policy KD suffers from the OOD issue, is there a way to show how severe the issue is? 
For example, would it be possible to quantitatively evaluate?\", \"It is a bit hard to understand the task-agnostic distillation setting. I see the training and test sets are both based on the math reasoning datasets. Where does the shifts come from?\", \"In table 2, it is obvious that the improvements are rather marginal. In Table 1, it seems the proposed method falls short behind another baseline. Would you explain?\", \"Significant tests for Table 1 & 2 could further strengthen the results.\"], \"questions\": [\"Is there a way to show how severe the OOD issue of on-policy KD is? For example, would it be possible to quantitatively evaluate?\", \"Why does the setting proposed in Section 5.2 a \\\"task-agnostic\\\" one?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks a lot for your reply!\", \"comment\": \"Thanks a lot for your reply! I am glad that you feel more confident about our work! We will definitely include all empirical and theoretical time complexity in the Appendix. All other revisions will be done in the paper as well.\"}", "{\"title\": \"Score Changed\", \"comment\": \"Regarding weaknesses 1 and 3:\\nThe authors' validation loss analysis across different tasks (summarization, GSM, and translation) and initialization conditions (IT and SFT) provides strong evidence of SKD's effectiveness. I suggest adding these results to the supplement and referencing them in the main paper.\", \"regarding_weakness_2\": \"This was my oversight, as the authors have provided a detailed analysis in Appendix B, demonstrating how K values correlate with task characteristics. I appreciate their investigation of this aspect.\\n\\nI thank the authors for their detailed reply, which has addressed my concerns. 
Accordingly, I have increased my score to 6.\"}", "{\"title\": \"Additional questions?\", \"comment\": \"Hi Reviewer EjjZ,\\n\\nThank you again for your time and efforts in our work and your feedback is quite constructive to revise our paper! We have meticulously considered and responded to each of your concerns. If you have any additional questions, we are very happy to address them during this open discussion period.\\n\\nBest regards, \\n\\nThe authors\"}", "{\"summary\": \"The authors present Speculative Knowledge Distillation (SKD), a new KD method for effective knowledge transfer from large teacher models to smaller student models. SKD combines on-policy sampling with speculative decoding, allowing students to propose tokens while the teacher verifies and refines them, addressing issues in traditional KD methods like distribution mismatches and low-quality samples.\\n\\nThe authors evaluate SKD across tasks including translation, summarization, math, and instruction following, consistently outperforming baseline methods across different domains, data sizes, and initialization strategies. SKD shows robust performance in both task-specific and task-agnostic distillation, with strong generalization on complex math tasks and advantages in low-data scenarios.\", \"contributions\": \"1. Introducing SKD, which integrates on-policy sampling and speculative decoding for adaptive, high-quality knowledge transfer.\\n2. Demonstrating SKD\\u2019s superior performance across varied tasks, outperforming supervised and on-policy KD.\\n3. Validating SKD\\u2019s adaptability and efficiency, especially in diverse model initializations and low-data settings, without requiring additional supervised fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces Speculative Knowledge Distillation (SKD) to address the distribution mismatch between training and inference in traditional knowledge distillation. 
By combining on-policy sampling with speculative decoding, SKD enables efficient knowledge transfer, allowing the student model to autonomously generate and refine its samples.\\n\\n2. The authors validate SKD across various task scenarios, including translation, summarization, arithmetic reasoning, and instruction following, covering both task-specific and task-agnostic applications. Through comparisons with multiple baseline methods (e.g., supervised KD, on-policy KD, and ImitKD), the experimental setup is comprehensive and broadly applicable, providing certain support for SKD\\u2019s performance.\\n\\n3. SKD\\u2019s design builds on imitation learning theory, lending theoretical validity to the approach. Additionally, the paper includes detailed implementation steps and hyperparameter settings, providing actionable guidance for future research.\", \"weaknesses\": \"1. Impact of Early Low-Quality Samples Lacks In-Depth Analysis: The SKD framework uses interleaved sampling and speculative decoding to handle low-quality samples generated by the student model during its initial training stages. However, the authors do not analyze how these low-quality samples may impact complex tasks, such as mathematical reasoning. Early low-quality samples could affect the stability of teacher feedback, slow convergence, and compromise overall stability. The authors could consider adding an analysis of model convergence behavior across tasks of varying complexity in their experiments to gain a more comprehensive understanding of SKD\\u2019s convergence characteristics in complex scenarios.\\n\\n2. Task-Specific Adaptability: SKD maintains overall sampling quality through dynamic adjustments of token-level sample quality and adaptive switching between teacher and student sampling. However, the authors do not explore how to customize these mechanisms for different task types or complexities.\\n\\n3. 
Effect of Model Initialization Lacks Discussion: While the authors examine SKD\\u2019s performance under different initialization conditions, such as instruction tuning and supervised fine-tuning, they do not discuss how initial model quality affects SKD\\u2019s convergence efficiency and stability. The authors might consider investigating convergence behavior under various initialization qualities, particularly in low-quality scenarios, by dynamically adjusting the proportion of teacher samples to stabilize the training process.\", \"questions\": \"Question 1: The paper does not analyze how early low-quality samples generated by the student model may impact complex tasks, such as mathematical reasoning. How do the authors believe these early samples could affect the stability of teacher feedback and convergence rates?\", \"question_2\": \"Although the authors' framework incorporates dynamic adjustments and adaptive switching between teacher and student sampling, how do they envision customizing these mechanisms for different task types or complexities?\", \"question_3\": \"While the authors examine SKD\\u2019s performance under various initialization conditions, how does initial model quality affect convergence efficiency and stability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EgEyoZvyDw
Long-Term 3D Point Tracking By Cost Volume Fusion
[ "Hung Nguyen", "Chanho Kim", "Rigved Naukarkar", "Fuxin Li" ]
Long-term point tracking is essential to understand non-rigid motion in the physical world better. Deep learning approaches have recently been incorporated into long-term point tracking, but most prior work predominantly functions in 2D. Although these methods benefit from the well-established backbones and matching frameworks, the motions they produce do not always make sense in the 3D physical world. In this paper, we propose the first deep learning framework for long-term point tracking in 3D that generalizes to new points and videos without requiring test-time fine-tuning. Our model contains a cost volume fusion module that effectively integrates multiple past appearances and motion information via a transformer architecture, significantly enhancing overall tracking performance. In terms of 3D tracking performance, our model significantly outperforms simple scene flow chaining and previous 2D point tracking methods, even if one uses ground truth depth and camera pose to backproject 2D point tracks in a synthetic scenario.
[ "3d point tracking", "scene flow" ]
Reject
https://openreview.net/pdf?id=EgEyoZvyDw
https://openreview.net/forum?id=EgEyoZvyDw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zd4Crq0sXZ", "ySDGmK8qrZ", "uvfKrYIi4E", "mS5gBoiPM4", "lOneMDVSJw", "fzSmoQb9WM", "WMG8Wyt7mc", "VyfmxD3Czp", "TX9yoLU9if", "KdxKCMVzy4", "CeY6TIKer3", "Bhq9HLfdZ4", "6sZpVKeFqX" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review" ], "note_created": [ 1730671500070, 1732780486769, 1737523476247, 1733286038893, 1732780343793, 1732780308496, 1730755935034, 1732780436522, 1733246395217, 1730690708318, 1730557257200, 1732780463551, 1734688330417 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1949/Reviewer_iJKS" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Reviewer_45rn" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Reviewer_HSRg" ], [ "ICLR.cc/2025/Conference/Submission1949/Reviewer_SZMC" ], [ "ICLR.cc/2025/Conference/Submission1949/Authors" ], [ "ICLR.cc/2025/Conference/Submission1949/Area_Chair_n8by" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a feed-forward approach for tracking points in 3D space across long video sequences. While previous methods mostly work in 2D and can produce physically implausible motions when projected to 3D, this method directly tracks in 3D space and works without test-time optimization. The key technical contributions are a coarse-to-fine approach with cost volume fusion at each level (using transformers to combine past appearance and motion information) and explicit occlusion handling. 
The authors show their method outperforms both scene flow chaining and 2D tracking methods that are projected into 3D, even when those methods have access to ground truth depth and camera poses (all experiments performed on synthetic data).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"I like the paper's idea and the problem it tackles. The problem setting is quite close to scene flow - while there have been many scene flow papers recently, it remains an unsolved problem. The paper's main novelty lies in its network architecture. Although its individual components aren't novel (the overall architecture resembles recent point tracking papers like PIPs, Harley et al.), applying it to long-term 3D tracking is a nice contribution.\", \"The paper is well-written and well-structured, with clearly described technical details. I especially appreciate the well-written related work section, which helps readers who don't work directly in scene flow/tracking.\", \"The paper demonstrates good quantitative improvements over SOTA methods.\"], \"weaknesses\": [\"Missing evaluations on real data: Although the paper has extensive evaluations against recent scene flow methods, it lacks any evaluation or even qualitative results on real data (I checked the supplementary material as well). While I understand that real data doesn't always provide good ground truth, especially for dynamic objects, making evaluation challenging, the absence of results on common benchmarks like the KITTI Scene Flow dataset is unfortunate. The complete lack of real-data results, even qualitative ones, makes it difficult to assess the paper's practical impact. I would like to see more results on real data in the rebuttal.\", \"Missing discussion about online tracking: The paper doesn't compare to test-time optimization methods like Wang et al. (2023) and Luiten et al. 
(2023), arguing that their method \\\"tracks points online without test-time optimization.\\\" While I agree it's a feed-forward method that's potentially less computationally expensive than test-time optimization, \\\"online\\\" is a strong claim. There's no discussion or computational cost analysis comparing their method to existing approaches. (There is some runtime information in the supplementary, but no comparison against other methods.)\"], \"questions\": \"I'm optimistic about the paper overall, but would like to see more results on real data to complement the synthetic data presented in the paper.\", \"final_review_after_rebuttal\": \"I thank the authors for addressing some of my concerns and correcting some terminology misunderstandings. That being said, I agree with other reviewers that the writing quality could be improved, and I'm not completely convinced about the novelty of the proposed method. Given these reasons, I improved the rating to 6 (slightly above acceptance).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness #1: The paper could benefit from improved writing, especially in the sections describing the method.**\\n\\nPlease point out the parts that remain unclear in your final comments; we will update the final version to reflect them.\\n\\n\\n**Weakness #2: The authors should cite, discuss, and compare their method to SpatialTracker and SceneTracker**\\nPlease see the common response for a comparison against SpatialTracker. SceneTracker is an unpublished work concurrent with ours, and it is similar to SpatialTracker, as both are iterative, non-online 2.5D approaches that estimate in the camera coordinate frame rather than the world coordinate frame. 
We\\u2019ll cite it as well.\\n\\n**Weakness #3: Comparing the learning-based approach with optimization-based 3D trackers for a reference**\\n\\nPlease see the common response for why such a comparison is not possible.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We sincerely thank all the reviewers for their insightful feedback and valuable suggestions, which have greatly contributed to improving the quality of this paper.\"}", "{\"comment\": \"We thank the reviewer for the insightful feedback. Most of the questions the reviewer asked were addressed in the common feedback; hence, we hope the reviewer will check those. In terms of the TapVID-Davis dataset, that dataset does not have any 3D ground truth and we didn\\u2019t claim to improve the 2D performance over the 2D long-term tracking baselines, hence we decided to test on the ADT dataset instead.\", \"below_are_answers_to_your_questions_in_addition_to_the_common_answers\": \"**Question #2: Was the model trained on the combination of Kubric and PointOdyssey?**\\n\\nThe model was trained on the Kubric training set for the Kubric experiment. It was then fine-tuned on the PointOdyssey dataset before being tested on the corresponding testing split for the PointOdyssey experiment.\"}", "{\"comment\": \"We thank the reviewers for their valuable feedback. Below, we address common questions and have marked updates in the paper and supplementary document in red. Please see our individual responses for other queries.\\n\\n**Q1. Comparison with Spatial Tracker, and whether it\\u2019s necessary to lift depths to 3D points for 3D tracking**\\n\\nThanks for pointing out the reference, which we have now cited. SpatialTracker operates in 2.5D, outputting query point locations in the next 2D frame, combining camera and point motion. In contrast, our method works in 3D world coordinates, outputting motion in world space. 
Additionally, SpatialTracker uses iterative estimation and future information, making it offline, while our approach relies only on past data.\\n\\nWe would argue SpatialTracker is still primarily a method based on 2D matching, with the additional triplane encoder providing some 3D neighborhood information. Such methods have difficulty keeping still points still in featureless regions, as their output mixes camera motion and point motion. Although one can tune ARAP (As-Rigid-As-Possible) parameters to improve on that, it is difficult to balance the need for weaker rigidity constraints on points with deformable motion and the need for stronger rigidity constraints on still points. Hence, there would be a tendency to either oversmooth deformable motion, or to have the still points always jitter around. The separation between moving and still points is easier to deal with once we have an explicit 3D approach, where still points should have a motion of zero. See also our results below for this effect in a real-world dataset. Indeed, one constraint of a true 3D approach such as ours is the need to know camera pose. But nowadays there are many SfM and SLAM approaches that obtain excellent camera pose estimates, and one can use the outputs from those systems.\\n\\n**Q2. Real-world results**\\n\\nWe present results on the real-world Tapvid3D-ADT dataset (minival split), detailed in Section 8 of the supplementary. The dataset includes ground truth depths from 3D scanning and estimates from ZoeDepth, a monocular depth estimator with an unknown, frame-dependent scaling factor, complicating frame-to-frame reconciliation. SpatialTracker did not test 3D tracking on real datasets for this reason (as noted in their paper). 
For a fair comparison, we use ground truth depth data for both methods.\\n\\n\\n| Method | 3D-AJ $\\\\uparrow$ | APD $\\\\uparrow$ | OA $\\\\uparrow$ |\\n| -------- | ------- | ------- | ------- |\\n| Ours (fine-tuned on one part of ADT) | 49.9 | 63.5 | 88.5 |\\n| Ours (trained only on Tapvid) | 11.7 | 25.0 | 56.1 |\\n| Spatial Tracker | 10.1 | 14.9 | 72.6 |\\n\\nThe results show that even our method trained only on the synthetic Kubric dataset outperforms SpatialTracker, whereas fine-tuning on a small part of ADT improves the performance significantly further. By tracking points explicitly in the world coordinate system rather than the camera coordinate system, we effectively disentangle camera motion from actual point motion, enabling a clear distinction between dynamic and static points. Qualitatively, we see that with SpatialTracker the static points cannot stay static and keep jittering around, which is a main reason for their lower scores.\\n\\n**Q3: Comparison to test-time optimization approaches such as Wang et al. (2023) and Luiten et al. (2023)**\\n\\nWe did not use Dynamic 3D Gaussian Splatting (Luiten et al. 2023) as our baseline because it cannot be applied to 3D point tracking benchmarks such as Kubric, PointOdyssey, and TAPVid-3D, where only a **monocular** video is provided as input for the point tracking task. Dynamic 3DGS only uses depth information to initialize the set of Gaussians. During the optimization process, it introduces **many new** Gaussians and relies on **multiple views** with a sufficiently large baseline apart from each other to reason about the 3D properties of these Gaussians. In addition to that, Dynamic 3DGS assumes that all scene points are visible in the first frame with their multi-camera setup. 
As a result, after initializing 3D Gaussians in the first frame, they only optimize for Gaussian motions in the subsequent frames, without any capability of handling new points that do not appear in the first frame, making this work not applicable to the 3D point tracking benchmarks mentioned above. \\n\\nUnlike Dynamic 3DGS, OmniMotion (Wang et al. 2023) does not output tracks in 3D. It utilizes a 3D space called a quasi-3D canonical volume for accurate 2D tracking, but this 3D space cannot be used to convert their 2D tracks into 3D tracks in the real-world coordinate system, making them ineligible for the 3D point tracking evaluation. We updated the main paper to clarify this difference. \\n\\n**Q4. Qualitative Results**\\n\\nWe added 2 more videos (*kubric_comparison.mp4* and *odyssey_comparison.mp4*). More discussion about these new qualitative results can be found in the supplementary document, Section 7.\"}", "{\"summary\": \"This paper introduces a new deep learning framework specifically designed for long-term 3D point tracking, capable of functioning without test-time optimization. Recognizing the limitations of previous 2D tracking methods, the authors have developed a coarse-to-fine tracking approach that leverages a transformer architecture and a cost volume fusion module at each level of processing. This allows for effective integration of multiple past appearances and motion information, significantly enhancing tracking accuracy and performance, particularly through periods of occlusion.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a deep learning-based framework for long-term 3D point tracking, a first in its domain according to the authors' claims. This approach is impressive in how it combines various techniques to address a complex problem.\\n2. The authors provided extensive experimental results over various benchmarks to verify their claims.\", \"weaknesses\": \"1. 
The authors claim this is the first method for 3D point tracking. However, a highly related work, SpatialTracker [A], is neither discussed nor compared here. Although SpatialTracker works on 2D images and depths instead of directly on 3D points, its input and output are exactly the same as this work's. The authors should discuss and compare with SpatialTracker to verify their points. This is also related to another question: \\\"is it really necessary to lift depths to 3D points for 3D tracking?\\\"\\n\\n2. The reviewer is quite curious about the 2D projection accuracy of the method. The paper provides Table 4, but this is not very convincing because Kubric is a super simple synthetic dataset and the proposed model has been trained on Kubric. The reviewer recommends testing the proposed method on TAP-Vid DAVIS or other real-world datasets with complex point tracks and occlusion to verify its effectiveness. \\n\\n\\n[A] SpatialTracker: Tracking Any 2D Pixels in 3D Space. Xiao, Yuxi and Wang, Qianqian and Zhang, Shangzhan and Xue, Nan and Peng, Sida and Shen, Yujun and Zhou, Xiaowei. CVPR 2024.\", \"questions\": \"The reviewer would recommend that the authors provide the necessary comparison to SpatialTracker and test the proposed method on TAP-Vid DAVIS.\\n\\n\\n(Not important): the paper mentions \\\"then train and test on two separate datasets, TAPVid-Kubric (Doersch et al., 2022) and PointOdyssey (Zheng et al., 2023).\\\". Does this mean the model was trained on the combination of Kubric and PointOdyssey?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the insightful feedback. Your questions about real-world results and qualitative results are answered in the common responses; hence, we hope the reviewer will check those. 
In terms of the TapVID-Davis dataset, that dataset does not have any 3D ground truth and we didn\\u2019t claim to improve the 2D performance over the 2D long-term tracking baselines, hence we decided to test on the ADT dataset instead.\", \"below_are_answers_to_your_questions_in_addition_to_the_common_answers\": \"**Weakness #1: Difference with FlowNet3D is not clear**\\n\\nFlowNet3D is a scene flow prediction framework that estimates the short-term motion of each point between 2 frames. The goal of this paper is long-term point tracking, which attempts to generate long-term point trajectories that are more consistent than simply chaining scene flow predictions. We developed our own scene flow model, which significantly outperforms FlowNet3D on the 2-frame scene flow task (Table 1). The long-term tracking results obtained by chaining 2-frame scene flow models are presented in Table 2 under the label Scene Flow Chaining, and they are significantly worse than those of our model.\\n\\n**Weakness #2: brief comparison of runtime performance**\\n\\nWe benchmark the following methods on the ADT dataset, where each video has 300 frames.\\n| Method | FPS |\\n| -------- | ------- |\\n| Ours | 3.6 |\\n| Spatial Tracker | 3.7 |\\n\\nFor the OmniMotion (Wang et al. 2023) method, each video could take **several hours** even on an A100 due to the optimization during test time, which is several orders of magnitude slower than our method. Besides, OmniMotion is actually not a true 3D method, as it can only output 2D tracks; please see the common answer.\\n\\n**Question #1: what is the meaning of (-26.2) (-15.3) in Table 2?**\\n\\nWe are sorry for the confusion. They mean that the results of these respective methods are 26.2 and 15.3 percentage points lower than the results of our method. Those numbers show the difference from our previous best number (73.1). When we achieved the current best number (73.8), we only updated the table with 73.8. 
We have also updated the differences in the table to reflect our current result.\\n\\n**Question #2: It is not clear why the novel view in Figure 3 is black and inconsistent with the original view color.**\\n\\nWe did not render the novel-view scenes and only visualized the point trajectories with a black background. This is because in the Kubric dataset, the depth map from a single view at a specific timestep allows for only partial scene reconstruction. Consequently, the rendered image of the partial scene in a new view may appear noisy. Therefore, we didn\\u2019t render the novel views and only visualized the point trajectories in that view.\"}", "{\"summary\": \"The paper presents a novel deep learning framework for long-term 3D point tracking, leveraging cost volume fusion with a transformer architecture. Designed to track dynamic points without test-time optimization, the model incorporates multiple past appearance and motion cues, outperforming existing 2D tracking approaches when projected to 3D.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Utilizes a transformer-based cost volume fusion module to handle occlusions and integrate long-term appearance and motion information.\\n\\nExtensive performance evaluation demonstrates superiority over 2D methods, especially in occluded scenarios.\", \"weaknesses\": \"As the paper claims that it is the first method to achieve 3D point tracking without test-time optimization, to the best of the reviewer's knowledge, some works such as FlowNet3D could also predict point cloud tracking results without test-time optimization. 
The difference between the proposed method and these methods is not clear.\\n\\nAs the paper emphasizes that it does not need test-time optimization, a brief comparison of runtime performance would enhance the practical applicability discussion.\\n\\nThe model relies heavily on synthetic datasets (e.g., TAPVid-Kubric, PointOdyssey), which may limit generalizability to real-world, varied environments.\\n\\nThe paper conducts qualitative comparisons only with CoTracker, which is insufficient and somewhat weak. It may need to include more qualitative comparisons in the paper.\", \"questions\": \"There are some unclear expressions in the paper. For example, what is the meaning of (-26.2) (-15.3) in Table 2? It is not clear why the novel view in Figure 3 is black and inconsistent with the original view color.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel deep-learning method for long-term point tracking in 3D, which generalizes to new points and videos without test-time fine-tuning. Using a coarse-to-fine approach with cost volume fusion modules and transformers, the model effectively integrates appearance and motion information, allowing it to handle occlusions and outperform prior 2D methods and scene flow chaining in 3D tracking accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper tackles an important problem in computer vision.\\n\\n2. The proposed method outperforms the 2D tracking and Scene Flow Chaining baselines in 3D point tracking, and surpasses scene flow methods in scene flow estimation on the FlyingThings dataset.\\n\\n3. The authors provide ablation studies of their design choices.\", \"weaknesses\": \"1. The paper could benefit from improved writing, especially in the sections describing the method.\\n\\n2. 
The method claims to be the first online deep learning-based tracking framework capable of tracking any point in 3D point clouds. However, both SpatialTracker [a] (CVPR24) and SceneTracker [b] also propose generalizable 3D tracking methods. The authors should cite, discuss, and compare their method to these works.\\n\\n3. While you propose a learning-based method, comparing it with optimization-based 3D trackers as a reference would be beneficial.\", \"note\": \"You should define the EPE3D metric, which is used in Table 1, in the text.\\n\\n[a] Xiao, Yuxi, et al. \\\"SpatialTracker: Tracking Any 2D Pixels in 3D Space.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[b] Wang, Bo, et al. \\\"SceneTracker: Long-term Scene Flow Estimation Network.\\\" arXiv preprint arXiv:2403.19924 (2024).\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the insightful feedback. Your questions about real-world results and comparisons with test-time optimization methods are answered in the common questions.\", \"below_are_answers_to_your_questions_in_addition_to_the_common_answers\": \"**Weakness #2: Online is a strong claim.**\\n\\nThe reviewer may have some misunderstanding about the term \\u201conline\\u201d. \\u201cOnline\\u201d as used in common computer vision literature merely means that the method makes predictions depending only on **current and past information**, which is true for our algorithm but not true for many baselines, e.g. 
SpatialTracker, OmniMotion and Dynamic Gaussian Splatting, which use the entire sequence to make predictions for earlier frames (offline, as they use information from the **future** for every frame).\"}", "{\"metareview\": \"In this paper, the authors proposed a long-term 3D point tracking method with cost volume fusion, which is performed at each level using a transformer architecture. The method outperforms the important baseline by projecting 2D tracking into 3D. The reviewers had a discussion after the rebuttal, and there are still some limitations of the paper. More comprehensive comparisons with SpatialTracker should be included, and it is still unconvincing why SpatialTracker is considered 2.5D. Also, while the camera poses are assumed known for the proposed method, it does not substantially surpass methods without using camera poses. For these reasons, I recommend a decision of rejection.\", \"additional_comments_on_reviewer_discussion\": \"Initially the reviewers raised concerns about the comparisons with SpatialTracker, the novelty and contributions (whether it is really the first method to achieve 3D point tracking without test-time optimization), more experimental comparisons and evaluations, and discussions. The authors addressed some of them, but there are still important issues as I summarize in the metareview. Therefore I recommend a decision of rejection.\"}" ] }
Eg32tDGgF5
DO GENERATIVE MODELS LEARN RARE GENERATIVE FACTORS?
[ "Fasih Haider", "Edward Moroshko", "Yuyang Xue", "Sotirios A. Tsaftaris" ]
Generative models are becoming a promising tool in AI alongside discriminative learning. Several models have been proposed to learn in an unsupervised fashion the corresponding generative factors, namely the latent variables critical for capturing the full spectrum of data variability. Diffusion Models (DMs), Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are of particular interest due to their impressive ability to generate highly realistic data. Through a systematic empirical study, this paper delves into the intricate challenge of how DMs, GANs and VAEs internalize and replicate rare generative factors. Our findings reveal a pronounced tendency towards the memorization of these factors. We study the reasons for this memorization and demonstrate that strategies such as spectral decoupling can mitigate this issue to a certain extent.
[ "Representation Learning" ]
https://openreview.net/pdf?id=Eg32tDGgF5
https://openreview.net/forum?id=Eg32tDGgF5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYF4QS1Elo", "yB7Y2vlZSu", "sMNLVNFGqU", "lE95DDEmMX", "VQVG33yEzY" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1729999926124, 1730650252035, 1730118346317, 1732540498581, 1729216304254 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3871/Reviewer_VVb7" ], [ "ICLR.cc/2025/Conference/Submission3871/Reviewer_wLEM" ], [ "ICLR.cc/2025/Conference/Submission3871/Reviewer_qBRD" ], [ "ICLR.cc/2025/Conference/Submission3871/Authors" ], [ "ICLR.cc/2025/Conference/Submission3871/Reviewer_DzhW" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores how generative models like VAEs, GANs, and DMs learn rare generative factors. Through an empirical study on skewed datasets, it finds that these models often fail to generalize rare factors and instead memorize them. The paper proposes a mitigation strategy using spectral decoupling for GANs, which partially addresses this issue.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper explores an interesting problem in the learnability of rare generative factors.\\n2. It provides concrete examples of the downstream applications of RGFs, such as in medical imaging, literary text generation, and vehicle classification.\", \"weaknesses\": \"1. There is no dedicated section for related work, which makes it difficult to understand how the proposed approach builds on or differs from existing work on memorization and generalization in generative models.\\n2. A single statistical method (z-test) is insufficient to fully evaluate RGF learning.\\n3. The spectral decoupling technique should be introduced more comprehensively earlier in the methodology section, with a justification for its potential effectiveness across all generative models, not just GANs.\\n4. Lack of comparison with other methods for mitigating RGF memorization.\\n5. 
Focusing only on GANs for the memorization analysis weakens the generalizability of the results. A detailed investigation of memorization in VAEs and DMs is needed to support broader conclusions.\", \"questions\": \"Please see the weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a systematic study of generative models (VAEs, GANs, and DMs) and their capability to learn rare generative factors (RGFs). By creating both balanced and skewed datasets, it is investigated whether these models generalize RGFs or merely memorize them. The authors show that Spectral Decoupling (SD) helps alleviate memorization in GANs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe study addresses a crucial problem, focusing on how models handle rare but impactful factors in data. Very valuable for applications in medicine and other fields with inherently imbalanced datasets.\\n2.\\tThe authors develop a decent experimental setup to surface the problem in an approachable manner.\", \"weaknesses\": \"Relation to disentanglement and causality literature missing: This is my main criticism of this work. The problem that it tackles is very well-known in deep generative modeling. DGMs are known to take `shortcuts` instead of learning the causal generative mechanism if the problem is not well constrained. A rich body of work on learning disentangled representations in VAEs and GANs tackles practically the same issue. More recently, there are works at the intersection of causality and disentanglement in DGMs, for example [1]. The current work, as presented, does not relate back to these works or clarify if and how its findings are different from or complementary to the domain of disentangled and/or causal DGMs.\\n\\n[1] Zhang, J., Greenewald, K., Squires, C., Srivastava, A., Shanmugam, K., & Uhler, C. (2024). 
Identifiability guarantees for causal disentanglement from soft interventions. Advances in Neural Information Processing Systems, 36.\", \"experimental_setup_and_memorization\": \"I am not sure if memorization is the cause behind the observation. In the current experimental setup, the DGMs seem to learn exactly what the training data distribution implies. Unless there are additional constraints (disentanglement, causal or SD), there is no reason for the DGM to necessarily learn the causal mechanism. This is known as the `shortcut problem` but it is not a form of memorization.\", \"questions\": \"See above for the main clarifications I seek; a minor question is:\\n- Why do you call the evaluation classifier an oracle? As far as I understood, it is a classifier trained on the uniform dataset; while very good, it does not know the ground truth class of any data from the true distribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses the difficulties of generative models like VAEs, GANs, and DMs in learning rare generative factors. The paper designs a framework to systematically study the learning of rare generative factors in generative models and concludes that GANs and DMs exhibit a stronger tendency towards memorization of rare generative factors compared to VAEs. It also demonstrates that regularization techniques, such as spectral decoupling, can mitigate this memorization tendency to some extent.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper designs a novel framework to systematically study the learning of rare generative factors in generative models.\\n2.\\tThe paper proposes spectral decoupling to mitigate this memorization tendency of generative models.\", \"weaknesses\": \"1.\\tThe motivation is not clear. 
In the introduction section, it is only mentioned that the purpose of this paper is to examine whether generative models can derive rare generative factors, but a clear motivation is lacking. For example, what is the purpose of deriving rare generative factors? What negative impacts might arise if rare generative factors are not effectively learned?\\n2.\\tGenerative models typically learn latent representations automatically; however, this paper manually defines Generative Factors, and all experiments are based on these predefined factors. This approach may create a discrepancy with the stated objective in line 92: \\u201cOur work provides valuable insights into the limitations of current generative models in learning robust, transferable representations from imbalanced datasets, opening new avenues for improving their generalization capabilities.\\u201d\\n3.\\tThe experiments are not consistent with the examples. In these examples, the rare factor has a strong relationship with the label. However, in the dataset D_u, the numbers of samples with two rare factors are the same. From my point of view, it is more like an imbalanced classification problem, while the authors' experiments did not take it into account. \\n4.\\tp>0.05 only means that one cannot reject the null hypothesis, and is not strong evidence that the model has effectively learned the factor. 
\\n5.\\tThe paper lacks a detailed explanation of how Spectral Decoupling mitigates this memorization tendency.\\n6.\\tSome parts of the paper are verbose and convoluted, making it difficult to understand.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper answers the question of whether generative models learn rare generative factors (RGFs), which are latent variables whose frequency is highly skewed in the real world but play an important role in the generative process. The ability to capture RGFs is crucial for many real-world tasks, e.g., successfully diagnosing Alzheimer's disease in younger patients.\\n\\nTo test this hypothesis, the authors train and test three different generative models, namely a generative adversarial network (GAN), a variational autoencoder (VAE) and a diffusion model (DM), on two different types of datasets. One dataset has the generative factor uniformly distributed across all training instances, and the second dataset has the RGF concentrated in a single class to simulate the most extreme case of RGFs. Classifiers are trained to discern the RGFs from samples generated by the learned models, and a statistical test is applied to determine whether the generative models have successfully learned the RGF, or instead memorized it.\\n\\nThe results show that generative models are capable of learning RGFs, but GANs and DMs are more biased towards memorization than VAEs, highlighting the nuances in how various generative models approach learning of RGFs. The authors then attempt to understand and potentially mitigate RGF memorization in GANs, noting that GANs have the greatest tendency for memorization, possibly due to the adversarial discriminator. 
The authors suggest that the discriminator learns a spurious correlation between the RGF and the training instance, which the authors note is reminiscent of the \\\"gradient starvation\\\" phenomenon. To that end, the authors train a GAN using spectral decoupling, which prevents gradient starvation, and notice that it mitigates RGF memorization to a certain degree.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The subject is well-defined and the presentation of the paper is good. There are numerous examples throughout the paper that describe RGFs and their significance from a real-world perspective. The experiments were well-defined and rigorously designed, making the results empirically sound. Overall, the paper was easy to follow from start to finish and the significance of the subject easy to capture.\", \"weaknesses\": \"The paper notably lacks novelty; learning of rare generative factors is a small part of the much larger problem of effectively capturing a data distribution by generative models. More specifically, generative models have difficulty capturing all the modes of the data distribution while also assigning high probability to areas of low probability in the distribution's manifold [1, 2, 3, 4].\\n\\nAdditionally, this topic has been explored in several other papers, with [5] in particular coming to similar conclusions about GANs assigning spurious correlations between generative factors, albeit from a different motivation. In a similar vein, the explanation for why GANs failed in particular was mainly empirical and post-hoc in nature, which I believe makes the results rather weak.\\n\\nReferences\\n[1] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows, 2016. URL https://arxiv.org/abs/1505.05770.\\n\\n[2] Bin Dai and David Wipf. Diagnosing and enhancing VAE models, 2019. URL https://arxiv.org/abs/1903.05789.\\n\\n[3] Partha Ghosh, Mehdi S. M. 
Sajjadi, Antonio Vergari, Michael Black, and Bernhard Scholkopf. From variational to deterministic autoencoders, 2020. URL https://arxiv.org/abs/1903.12436.\\n\\n[4] Danilo Jimenez Rezende and Fabio Viola. Taming VAEs, 2018. URL https://arxiv.org/abs/1810.00597.\\n\\n[5] Sergio Garrido, Stanislav S. Borysov, Francisco C. Pereira, & Jeppe Rich. Prediction of rare feature combinations in population synthesis: Application of deep generative modelling. 2020. Transportation Research Part C: Emerging Technologies, 120, 102787.\", \"questions\": \"My suggestion to the authors is to discuss the fundamental objectives of the various types of generative models and how they relate to the RGF memorization behaviour on a theoretical basis.\\n\\nAs an example: the KL divergence objective in VAEs is zero-forcing, and therefore will underestimate the support of the true data distribution. In a \\\"dumb\\\" Gaussian latent VAE, the posterior may fail to assign probability to certain low-probability areas in the latent manifold, preventing VAEs from generating rare-but-valid combinations of data. It may also help if the authors could train a VAE with normalizing flows (see [1] in \\\"Weaknesses\\\") to see if the RGF memorization problem is mitigated further, putting theory into practice.\\n\\nWithout needing to go into too much depth, a basic explanation of the models' respective objectives and how they relate specifically to learning of RGFs will greatly improve the meaningfulness of the contributions, and by extension the novelty.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EeqlkPpaV8
The adaptive complexity of parallelized log-concave sampling
[ "Huanjian Zhou", "Baoxiang Wang", "Masashi Sugiyama" ]
In large-data applications, such as the inference process of diffusion models, it is desirable to design sampling algorithms with a high degree of parallelization. In this work, we study the adaptive complexity of sampling, which is the minimum number of sequential rounds required to achieve sampling given polynomially many queries executed in parallel at each round. For unconstrained sampling, we examine distributions that are log-smooth or log-Lipschitz and log strongly or non-strongly concave. We show that an almost linear iteration algorithm cannot return a sample with a specific exponentially small error under total variation distance. For box-constrained sampling, we show that an almost linear iteration algorithm cannot return a sample with sup-polynomially small error under total variation distance for log-concave distributions. Our proof relies upon novel analysis with the characterization of the output for the hardness potentials based on the chain-like structure with random partition and classical smoothing techniques.
[ "sampling", "adaptive complexity", "computational statistics" ]
Accept (Poster)
https://openreview.net/pdf?id=EeqlkPpaV8
https://openreview.net/forum?id=EeqlkPpaV8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yDScOM1XVG", "wvqGgU9716", "sHULgvAXeY", "jBFGR5qqxU", "e5vP6z38Rx", "aHK0vjQTko", "ZBfalaDWmu", "XVzhNgUfxi", "SXZFeCmG8y", "Ot82Nwtn5Y", "NyCG6htTAM", "NFDJsV0Zkm", "HOY7Eyo2lW", "H0tEoYIklx", "CsLDlx1d9G", "8JeBXtu9QU", "71AjdJwDkT", "11pQvmRkBW" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1731863776165, 1732616295862, 1731335440774, 1732725761858, 1731863707295, 1730523121391, 1732725691401, 1731863836324, 1732444993653, 1733194182229, 1732644558224, 1737523728491, 1730687172805, 1731863579369, 1734815767684, 1732878322338, 1730865579006, 1732725202497 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_niWg" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_niWg" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_MYcM" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_nyUR" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_nyUR" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_VDWB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_VDWB" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Area_Chair_ZGf9" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ], [ "ICLR.cc/2025/Conference/Submission5845/Reviewer_MYcM" ], [ "ICLR.cc/2025/Conference/Submission5845/Authors" ] ], "structured_content_str": [ 
"{\"comment\": \"Thank you for your comprehensive review and valuable feedback. Below, we address the specific points raised:\\n\\n**\\\"...improve over previous works which show lower bounds... comparison to previous works on lower bounds... clarify in what settings/regimes they improve over lower bounds from prior works...do they improve on previous lower bounds for first-order oracles...\\\"**\\n\\n\\nWe respectfully disagree with this point, as our work establishes the **first lower bounds** for parallel log-concave sampling. Prior works primarily focused on upper bounds, and no previous results have addressed lower bounds in these settings. \\n\\n\\nWhile we understand that the reviewer might have assumed the existence of prior lower bounds due to parallel developments in related fields or the query complexity of log-concave sampling, this is not the case for the specific problem of adaptive parallel sampling. Therefore, a direct comparison is technically not available. We will revise the manuscript to explicitly highlight that these are the first lower bounds for the settings we consider.\\n\\n\\n**\\\"... intuitive discussion of the difference between adaptive complexity vs. query complexity...\\\"**\\n\\nThank you for this suggestion. Adaptive complexity measures the number of interaction rounds needed in a parallel or distributed setting, whereas query complexity refers to the total number of oracle calls. This distinction is critical because reducing adaptive complexity can significantly improve runtime in parallel systems, even if the total query complexity remains unchanged. 
We will add a more intuitive explanation of this distinction in the manuscript.\\n\\n**\\\"do they improve on previous lower bounds for first-order oracles, or is the improvement only for zeroth-order oracles?\\\"**\\n\\nSince this work establishes the first lower bounds for parallel sampling in both zeroth-order and first-order oracle settings, the notion of \\u201cimprovement\\u201d over prior lower bounds is not applicable here.\"}", "{\"comment\": \"Thank you for letting me know about the other paper submitted to ICLR. Given this information, I have increased my score to 5. I would not be against acceptance if the clarity of the writeup is significantly improved and a discussion on tightness (using this other ICLR submission) is added to it.\"}", "{\"summary\": \"This paper provides lower bounds for the problem of adaptive sampling from a distribution over $\\\\mathbb{R}^d$ with probability density functions proportional to $\\\\exp(-f)$ for some smooth, Lipschitz or convex function $f:\\\\mathbb{R}^d\\\\rightarrow\\\\mathbb{R}$. The algorithm can have access to a 0-th order oracle (i.e., it can query the values of $f$) that can also be translated to gradient queries with a polynomial blowup in the number of queries. Before making each query, the algorithm can also see the replies of the oracle to the previous queries. The lower bounds are also extended to box-constrained sampling. In the latter, the total variation distance between the true and the sampled distribution (sampling accuracy) is inverse polynomial in the dimension, while in the former case, it is inverse exponential. 
The lower bounds are linear in the dimension $d$ up to logarithmic factors, while the best known upper bounds are higher-degree polynomials depending on the function $f$.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"First attempt at a lower bound on adaptive sampling applicable to a wide range of distributions.\", \"weaknesses\": \"The paper only has negative results (lower bounds), which are not tight for any parameter regime.\\nThe writeup could also be improved in terms of typos/grammar as well as clarity of the presentation (especially in section 3).\", \"minor_comments\": \"\", \"line_15\": \"\\u201cminimal\\u201d->\\u201dminimum\\u201d\", \"line_21_and_100\": \"\\u201csup-\\u201d->\\u201dsub-\\u201d\", \"line_21\": \"\\u201csmall accuracy\\u201d sounds weird (because it means \\u201chigh accuracy\\u201d here). Maybe \\u201csmall error\\u201d would be better.\", \"line_85\": \"Mention that c<1.\", \"section_3\": \"There are multiple occasions where you are correctly describing a reduction FROM hypothesis testing TO sampling, which is indeed the way to translate a hypothesis testing lower bound to a sampling lower bound. However, you incorrectly claim the reduction is the other way around, which is a bit confusing. (e.g. see Lines 225-226, 235, 269, 274).\", \"line_267\": \"\\u201ctotal variance\\u201d->\\u201dtotal variation\\u201d\", \"questions\": \"Can you comment on the possibility of closing the gap between upper and lower bounds in some parameter regime?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for answering my questions. I maintain my score.\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s thoughtful and constructive feedback, as well as the recommendation for acceptance. 
Below, we address the specific questions and suggestions raised:\\n\\n**\\\"...nice to see some results for diffusion models...Can your results be extended to diffusion models in a direct way?..\\\"**\\n\\nThank you for this suggestion. While our current results focus on log-concave sampling, the connection to diffusion models is indeed intriguing and highly relevant for algorithm design. Specifically, many log-concave sampling methods are based on discretizing a dynamic that converges to the target distribution. Similarly, the inference process in diffusion models involves discretizing dynamics described by stochastic differential equations or probability flow ordinary differential equations for the reverse process.\\n\\nHowever, the sampling methods used in diffusion models are not solely based on dynamics, which differentiates the lower-bound analysis. To establish lower bounds for diffusion models, it may be useful to leverage tools from information-based complexity in the scientific computing community, which considers the complexity of discretizing differential equations.\\n\\nIn the revised manuscript, we will add a discussion about the future potential of extending our lower-bound techniques to diffusion models. Additionally, we will cite recent works on parallel sampling for diffusion models ([1], [2], [3]) to better contextualize our contributions and highlight their relevance to practitioners.\\n\\n**\\\"What can you say about the low-accuracy regime?\\\"**\\n\\nIn this study, we concentrate on scenarios involving high dimensions and very high accuracy. Extending our analysis to broader accuracy ranges presents significant challenges. Specifically, as outlined in Lemma 3.3 of our study, the set of points that remain unreachable by time $t$ is defined as ${x : |X_i - X_j| \\\\geq t, \\\\forall i, j \\\\in [r+1] \\\\text{ and } i,j \\\\geq t+1}$. Accurately estimating the proportion of such an unreachable set presents significant difficulties. 
Furthermore, constructing an unreachable set with a large proportion of density using a polynomial number of queries is particularly challenging. We view this as an important direction for future research and will include this discussion in the revised manuscript to highlight the open questions related to the low-accuracy regime.\"}", "{\"summary\": \"This paper studies the round complexity of sampling from log-concave distributions $\\\\propto e^{-f}$ in $\\\\mathbb{R}^d$ given (zeroth-order) query access to $f$. For unconstrained sampling from strongly log-concave and log-smooth densities, the present work shows that achieving target accuracy which is exponentially small in $d$ requires $\\\\tilde{\\\\Omega}(d)$ many rounds. They also extend the lower bound in this paper to give an $\\\\tilde{\\\\Omega}(d)$ lower bound for box-constrained sampling of log-concave and log-smooth/Lipschitz densities to inverse super-polynomial error. These lower bounds are within a polynomial of the right answer, e.g. in the unconstrained setting Anari et al. previously gave a parallel algorithm with round complexity $O(\\\\log^2(d/\\\\epsilon))$, which is $O(d^2)$ for exponentially small $\\\\epsilon$. While the query complexity of log-concave sampling has been studied previously for low-dimensional distributions and for sampling Gaussians to constant accuracy, this paper appears to be the first to study the orthogonal question of getting lower bounds in the high-accuracy regime.\\n\\nAt a high level, the lower bound is based on constructing a certain potential $f$ which is a smoothing of a certain function which, for bounded-norm inputs, behaves like the following piecewise linear function. It is parametrized by an unknown partition of the coordinates into blocks of polylogarithmic size, and given an input $\\\\mathbf{x}$, the function adds to the weight of $\\\\mathbf{x}$ on the first set in the partition the following sum. 
It computes the imbalance between the weight of $\\\\mathbf{x}$ for each pair of consecutive sets in the partition and, if the imbalance exceeds some threshold $t$ for that pair, it adds to the running sum how much that imbalance exceeds $t$. So for inputs on which the weights of $\\\\mathbf{x}$ over the sets in the partition are roughly equal, this potential outputs something close to the weight of $\\\\mathbf{x}$ on just the first set of the partition. \\n\\nThis potential is designed in such a way that in the first $\\\\tau$ rounds of any parallel algorithm, if one thinks of the subsets in the partition as randomly chosen in sequence, then with high probability over the randomness of the subsequent subsets of the partition, the queries up to that point reveal no information about what the subsequent subsets are. This effectively forces the algorithm to run for $\\\\tilde{\\\\Omega}(d)$ rounds before it can query parts of space on which the potential is non-negligibly different from the weight of $\\\\mathbf{x}$ on the first subset of the partition.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Deriving good lower bounds on query complexity is a central question in the area of log-concave sampling, and this paper makes solid progress towards understanding the optimal parallel query complexity in a natural setting, namely the high-accuracy regime. The lower bound construction is a nice blend between the techniques used to show adaptive complexity bounds in submodular optimization and the information-theoretic approach used in prior work of Chewi et al. on query lower bounds for sampling. 
Additionally, it is interesting that they can derive a linear-in-$d$ lower bound even in the relatively low-accuracy regime in the box-constrained setting.\", \"weaknesses\": \"While these results are only relevant in the setting where the dimension $d$ is quite large, it is not clear that the lower bounds really get at the fundamental question of why there should be a dimension dependence in the query complexity of log-concave sampling. In particular, the fact that the authors get a near-linear lower bound for the unconstrained sampling setting feels more like a by-product of the fact that they are working in the high-accuracy regime than of the fact that the distribution lives in high dimensions (in particular, their lower bound could be consistent with a world in which the true round complexity is $\\\\tilde{\\\\Theta}(\\\\log(1/\\\\epsilon))$). Admittedly the near-linear lower bound in the box-constrained setting applies in the regime of only $1/d^{\\\\omega(1)}$ error, but the argument seems almost verbatim the same as the argument in the unconstrained setting, suggesting that the fact they get such a lower bound in the relatively low error regime is more a quirk of the constrained setting than something fundamental about dimension dependence for log-concave sampling.\", \"questions\": [\"The writing could be clarified in various places, e.g:\", \"In Lemma 4.3, the terminology \\\"up to addictive error $O(t/2)$\\\" it is only clarified later in the proof to mean that every *coordinate* is off by at most $O(t/2)$ (also why is there a big-O with a factor of 1/2?)\", \"The definition of $X_i(\\\\mathbf{x}')$ seems off, as the subscript $i$ is then used to index over $i\\\\in[\\\\ell]$ in $\\\\cup_{i\\\\in[\\\\ell]} P_i$\", \"The terminology \\\"concentration of conditioned Bernoullis\\\" is a little confusing as it is unclear what is being conditioned on. 
A more standard phrasing could be \\\"concentration of linear functions over the Boolean slice\\\"\", \"When \\\"partition $\\\\mathcal{P} = (P_1,\\\\ldots,P_{r+1})$ over the ground set\\\" was introduced, it was not clear to me that the ground set was referring to the set of coordinates of the Euclidean space in which the target distribution lives\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you again for your review and the follow-up discussion. Following your suggestion, we have updated the [manuscript](https://openreview.net/pdf?id=EeqlkPpaV8) to include the discussion on the matching upper/lower bounds. We have also spent time going through the manuscript to improve the presentation. There are around 30 minor changes made to polish the manuscript (highlighted in blue). We would like to ask you to re-evaluate the work given the updated manuscript, and we are more than happy to respond to any further questions and discussions.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful and constructive feedback, as well as for recognizing our solid progress towards understanding the optimal parallel query complexity, and a nice blend between the techniques. Below, we respond to the key points raised in the review.\\n\\n**\\\"...by-product of the fact that they are working in the high-accuracy regime than of the fact that the distribution lives in high dimensions...\\\"**\\n\\nWe totally agree that, due to the focus on high accuracy and high dimension, our construction cannot fully decouple the dependence on dimensionality and accuracy. For **query complexity**, in particular, direct application of our construction could lead to high-accuracy hardness even with O(1) queries. \\n\\nHowever, our focus is on **adaptive complexity**, where the challenge lies in hiding information while using a polynomial number of queries per round. 
For this, high accuracy appears necessary to establish hardness. Regarding dimensionality, we leverage concentration phenomena in high dimensions to effectively hide information. Exploring lower-accuracy regimes for both query and adaptive complexity remains an interesting and open direction for future work.\\n\\n**\\\"The writing could be clarified in various places...\\\"**\\n\\nThank you for pointing out areas for improvement in clarity. We have addressed the specific issues in the revised version, including:\\n\\n- Clarifying \\\"up to additive error $\\\\eta$\\\" in Lemma 4.3 and removing the $1/2$ factor.\\n- Replacing $X_i(x)$ with $X^i(x)$ throughout the manuscript.\\n- Rephrasing \\\"concentration of conditioned Bernoullis\\\" to standard terminology like \\\"concentration of linear functions over the Boolean slice.\\\"\\n- Clarifying that the \\\"ground set\\\" of the partition refers to the coordinates of the Euclidean space.\"}", "{\"comment\": \"Thank you for the clarification. As your improvements are exclusively for parallelized samplers, I think that it would be good to change the title to \\\"The adaptive complexity of parallelized log-concave sampling\\\". 
This should clear up the confusion for the reader.\\n\\nI have raised my score in light of the clarification, with the expectation that the title should be changed to highlight the parallel nature of the work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper shows lower bounds for a number of different log-concave sampling problems. Specifically, they show a lower bound of $\\\\tilde{\\\\Omega}(d)$ objective function evaluations for the problem of sampling from a unconstrained distribution. They also show a lower bound of $\\\\\\\\tilde{\\\\Omega}(d)$ objective function evaluations for the problem of sampling from a \\u201cbox-constrained\\u201d logconcave distribution constrained to the unit cube. In addition to applying to any unconstrained or box-constrained distribution, their lower bounds also handle the special cases when the log-density is smooth and Lipschitz. However, they do not obtain lower bounds for the special case where the log-density is strongly convex.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper improves over previous works which show lower bounds for the problem of sampling from a logconcave distribution.\\n\\nThe comparison to previous works for upper bounds is very clearly stated. However, the comparison to previous works on lower bounds is less clear, as there are different situations where such lower bounds apply (e.g., different accuracy levels) (see weaknesses below)\", \"weaknesses\": \"The writing in the paper could be improved in certain aspects. In particular, the comparison to prior works is somewhat confusing and could be made more clear.\\n\\nAs mentioned above, the comparison to previous works for upper bounds is very clearly stated, with a clear table\\u2014which is good. 
However, the comparison to previous works on lower bounds is less clear, as there are different situations where such lower bounds apply (e.g., different accuracy levels). It would be good to include a side-by-side comparison with previous works on lower bounds, perhaps as an additional table in the appendix if there is not enough room in the main body of the paper.\\n\\nAdditionally, it would be helpful to have a more intuitive discussion of the difference between adaptive complexity vs. query complexity. I am more familiar with query complexity, but adaptive complexity seems to be a less common term and it would be good to highlight the differences between the two concepts, and to explain with respect to which concepts the authors improve on previous lower bounds.\", \"questions\": \"Could the authors clarify in what settings/regimes they improve over lower bounds from prior works? Under what assumptions on the objective function, and for what required accuracy levels were the improvements, and by how much was the improvement? In what regimes were lower bounds previously available/not available? In what settings were they available, but improved by the authors? In what regimes were they not improved?\\n\\nAlso, I understand that the authors results apply both to zeroth-order oracle and first-order oracles, which is good. However, do they improve on previous lower bounds for first-order oracles, or is the improvement only for zeroth-order oracles?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are grateful that the reviewer acknowledges the novelty of our work as the **first lower bound** for adaptive sampling applicable to a wide range of distributions.\\n\\n**\\\"...only has negative results (lower bounds), which are not tight...\\\"**\\n\\nWe respectfully disagree with the claim that our results are not tight for any parameter regime. 
In fact, our results are point-wise tight when compared to an anonymous work recently submitted to ICLR, which provides corresponding upper bounds for adaptive sampling [1]. \\n\\n[1] https://openreview.net/forum?id=6Gb7VfTKY7&nesting=2&sort=date-desc\\n\\n**\\\"The writeup could also be improved...\\\"**\\n\\nWe apologize for any lack of clarity or typographical issues in the current draft. We will carefully proofread the manuscript to address the specific issues raised by the reviewer. \\n\\nFor Section 3, we will revise and restructure the explanations to ensure clarity, especially regarding the direction of the reductions.\\n\\nFor the suggestion to replace \\\"sup-\\\" with \\\"sub-\\\" in Lines 21 and 100, we would like to point out that this is not a typo. The term \\u201csup-polynomially small accuracy\\u201d is technically precise and refers to quantities smaller than any polynomial order of $d$.\\n\\n**\\\"... the possibility to close the gap between upper and lower bounds...\\\"**\\n\\nThank you for this insightful question. Compared with the anonymous work [1], our lower bound highlights the pivotal role of $\\\\log(1/\\\\epsilon)$, which is essential for understanding adaptive sampling complexity. However, the dependence on $d$ remains an open question and likely varies across distribution classes or oracle assumptions. Exploring sharper dimension-dependent bounds is a key direction for future work, and we will elaborate on these open questions in the revised manuscript.\"}", "{\"metareview\": \"This paper considers the adaptive complexity of sampling, i.e. the minimum number of rounds to sample from a target with polynomially many queries in parallel at each round. The authors focus on total variation distance. In the unconstrained case, they study log-smooth, log-Lipschitz and log strongly or non-strongly concave distributions and provide negative results proving that an algorithm with sub-linear round complexity cannot return a sample with exponentially small error. 
In the constrained case, the authors show the same with sup-polynomially small error, for log-concave distributions.\\n\\n\\nThis paper was reviewed by four expert reviewers with the following Scores/Confidence: 6/3, 8/2, 8/3, 5/3. I think the paper is studying an interesting topic and the results are relevant to the ICLR community. The following concerns were brought up by the reviewers:\\n\\n- The paper has tight lower bounds. But I think the authors should include a more detailed discussion on upper bounds in different settings as well. The authors should also clarify in their paper the point-wise tightness when compared to the work they cite in the rebuttal.\\n\\n- Several typos and ambiguous statements were pointed out by the reviewers. These should be carefully addressed.\\n\\n\\nThe authors should carefully go over the reviewers' suggestions and address any remaining concerns in their final revision. Based on the reviewers' suggestions, as well as my own assessment of the paper, I recommend including this paper in the ICLR 2025 program.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer questions are thoroughly answered by the authors.\"}", "{\"summary\": \"This paper studies the problem of parallel sampling, and shows a lower bound of $\\\\widetilde O(d)$ for the number of parallel iterations necessary in the \\\"high-accuracy\\\" regime for log-concave sampling. The paper also shows a similar result for the box-constrained setting. To achieve these lower bounds, the paper leverages techniques from the optimization literature that have been used to show lower bounds for adaptive algorithms for optimization. 
The paper also studies some other sampling regimes such as composite sampling, showing similar bounds.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper studies an interesting problem, and shows how to leverage techniques from a different but related area to show lower bounds for parallel sampling. The lower bounds are of interest to both theoreticians and practitioners, since parallel sampling methods for diffusion models have recently been proposed and have gained popularity.\\n\\nGenerally, the paper is well-written and easy to understand. I recommend acceptance\", \"weaknesses\": \"It would be nice to see some results for diffusion models, since it seems like they should naturally follow from your results. It would also be nice to cite the recent works from the diffusion literature on parallel sampling (see [1, 2, 3]). This would make this work far more appealing to practitioners.\\n\\nIt would also be nice to have a thorough description of the barriers in extending your techniques to the low-accuracy regime, and any specific intermediate conjectures towards proving such results.\\n\\n[1] https://arxiv.org/abs/2305.16317\\n\\n[2] https://arxiv.org/abs/2406.00924\\n\\n[3] https://arxiv.org/abs/2405.15986\", \"questions\": \"1) Can your results be extended to diffusion models in a direct way?\\n2) What can you say about the low-accuracy regime?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your feedback. This encourages us a lot. Following your suggestion, we have updated the manuscript, with the title revised to \\\"The adaptive complexity of parallelized log-concave sampling\\\" in the updated PDF. We believe this should clear up the confusion for the reader. 
We also made minor updates to the language and presentation.\\n\\nWe are more than happy to respond to any further questions and discussions.\"}" ] }
EeDSMy5Ruj
Synthetic Theorem Generation in Lean
[ "Joseph Rotella", "Zhizhen Qin", "Aidan Z.H. Yang", "Brando Miranda", "Mohamed El Amine Seddik", "Jingwei Zuo", "Hakim Hacid", "Leonardo de Moura", "Soonho Kong", "Shi Hu" ]
The application of large language models (LLMs) to theorem proving presents a promising avenue for advancing formal mathematics. Interactive theorem provers, such as Lean, offer a rigorous framework within which these models can assist in or automate proof discovery, grounding their reasoning capabilities in a sound, verifiable formal system. However, the potential of LLMs in this domain is constrained by the limited availability of formal proof corpora for training. To address this limitation, we introduce a synthetic theorem generator capable of producing novel Lean theorems and their corresponding proofs. Our approach employs forward reasoning to synthesize new propositions from premises drawn from existing Lean libraries. We explore candidate reasoning steps using a search strategy that optimizes for diversity of output, apply them in a linear fashion that avoids irrelevant proof steps, and assess their effect by meta-programmatically executing corresponding Lean tactics. These methods enable the generation of an arbitrary number of new theorems and proofs across various mathematical domains, using common Lean proof tactics while ensuring the correctness of generated theorems by construction. We demonstrate the efficacy of the generated theorems and training data by fine-tuning models on synthetic theorems and evaluating them on the miniF2F-test benchmark. Our results show improvements in theorem-proving capabilities, with accuracy increasing from 37.3% to 38.5% for the Falcon2-11B model trained solely on Mathlib, and from 38.1% to 39.3% for the same model trained on a mix of rich datasets. These improvements highlight the value of our diverse synthetic data in augmenting limited existing corpora of formal proofs, providing complementary information that enhances LLMs' performance on theorem-proving tasks even when combined with other datasets.
[ "theorem proving", "large language model", "synthetic data generation", "Lean" ]
Reject
https://openreview.net/pdf?id=EeDSMy5Ruj
https://openreview.net/forum?id=EeDSMy5Ruj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uS116huveD", "mTChaZlH4T", "fjIZn8YRbP", "cuGQ9hX8Cc", "bpFKjUypKQ", "ZtiqWN7rHR", "Wyvt6F05Qn", "UhqyIy13G0", "U6kxYN4O4H", "TVYpir5Nw8", "SfMUoYhoHW", "Nb4aFLPbrq", "L4sNNCmWNY", "Gv8DQq45CU", "GHr3EyFUL3", "Doup9AFgdf", "CQbD8l2mG3", "91UA0K0gZe", "8EUjbnLhy8", "46nYzvplKf", "1gk3XeU8vr" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732557536460, 1732557750570, 1732557814610, 1732557466098, 1732558054747, 1732557563384, 1732557408026, 1733176204526, 1732603645092, 1730614593663, 1734846986403, 1732557369065, 1732815351118, 1737524145146, 1730526062872, 1730669804789, 1729701341918, 1732558088358, 1732809246015, 1732557714379, 1732751891170 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_y32e" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_KjJi" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_tn5n" ], [ "ICLR.cc/2025/Conference/Submission11773/Area_Chair_6qqr" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_incY" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_y32e" ], [ 
"ICLR.cc/2025/Conference/Submission11773/Reviewer_KjJi" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Authors" ], [ "ICLR.cc/2025/Conference/Submission11773/Reviewer_incY" ] ], "structured_content_str": [ "{\"comment\": \"> **W2.** Does this imply that the generated theorems will also only employ these 5 tactics?\\n\\nThe generator is not architecturally limited to only employing the five tactics mentioned, as the system is more flexible and extensible than it might initially appear. Currently, our generator employs five forward-reasoning tactics (`have`, `rewrite`, `simp_arith`, `norm_num1`, and `ring_nf`) in addition to two optional proof-minimization tactics (`omega` and `aesop`), the behavior of each of which we describe in \\u00a74.3. However, there are two key ways in which our generator is more general than these figures might suggest:\\n\\n* Tactics can take arguments that significantly broaden the types of reasoning they can apply. For instance, although `have` is technically a single tactic, it is capable of applying any of the more than 150,000 lemmas in the Mathlib library, allowing it to invoke a wide range of inference strategies. As an example of the wide range of reasoning strategies a single tactic can support, using the `have` tactic with the lemma `Nat.lcm_pos` allows Lean to infer that `lcm m n` is positive from the facts that `m` and `n` are (a basic number-theoretic inference), while using the same `have` tactic with the lemma `Monotone.ae_differentiableAt` allows Lean to infer that a function `f` is differentiable Lebesgue-almost everywhere from the fact that `f` is monotone (a nontrivial result from analysis). \\n\\n* Our generator's infrastructure does not crucially depend upon the behavior of these particular tactics and could easily be extended with additional forward-reasoning tactics. 
There are currently a limited number of forward-reasoning tactics available in Mathlib, of which we aimed to select a robust subset with minimal overlap in functionality. With Mathlib's current inventory of tactics, we determined that the computational cost of adding any of the remaining forward-reasoning-capable tactics (e.g., `norm_cast0`) was not justified by the limited number of substantively distinct reasoning strategies such tactics would afford, especially given our desire to generate a large synthetic corpus and the diversity afforded by the tactics already employed. For use cases that place a premium on tactic diversity, however, extensibility with additional tactics is possible, especially if more forward-reasoning tactics or forward-reasoning modes for existing tactics are added to Mathlib.\\n\\n> **W3.** \\u2026the graphs are not particularly illustrative and lack detail. It would be beneficial to include a concrete, illustrative example when describing the method, as this would make the paper much easier to understand.\\n\\nWe have added a figure that includes a simple concrete example of the method of theorem synthesis. We have also added concrete examples of synthetically generated theorems to Appendix F, where we also give an account of the process by which they were generated to supplement the graphical illustrations.\\n\\n> **Q1.** Is it true that the base tactics used for theorem generation (excluding variants with specific parameters) consist of only five (or a slightly larger number)? Will the final generated theorems be limited to tactics from this set?\\n\\nPlease refer to our response to item W2 above.\\n\\n> **Q2.** Could you provide some concrete examples of the generated theorems? 
A detailed process showing how one of these theorems is created would be particularly helpful.\\n\\nPlease refer to our response to W3 and the appendix we reference there.\\n\\n> **Q3.** In lines 73\\u201376, references for \\\"Some\\\" and \\\"Other techniques\\\" should be added.\\n\\n[1,2,3,4] are works that generate proofs by performing proof search from conjectures obtained via random sampling or LLM-based methods. [5,6,7,8,9] are works that generate new theorems and their proofs by applying inference rules to reason forward from existing theorems. We have added these references in the updated paper.\"}", "{\"comment\": \"> **W2.** The paper does not provide specific metrics or validation methods to assess the quality or relevance of the generated synthetic theorems.\\n\\nWe appreciate the reviewer's concern about quality assessment of the synthetic theorems. While we acknowledge that developing comprehensive quality metrics for theorem generation remains an open research challenge, we provide several concrete measures to assess our approach:\\n\\nFirst, the coverage table presented in Appendix H serves as a quality indicator. This table demonstrates how our synthetic theorems span different mathematical fields, providing a quantitative measure of the diversity and relevance of the generated content.\\n\\nSecond, we compare the complexity of proofs that M1 can generate with and without synthetic data fine-tuning. Our results show that models trained with synthetic data are capable of producing more complex proofs. This serves as a proxy measure for the quality and utility of the synthetic training data.\\n\\nThird, one metric we already recorded in our initial experiments\\u2014as displayed in \\u00a75.1\\u2014is the mean length of the output proofs. Since each line corresponds to an additional reasoning step required to deduce the conclusion from the assumptions, this metric serves as a proxy for the complexity/difficulty of the proof. 
Because the desired range of generated proof lengths is configurable, the generator is capable of creating proofs of effectively arbitrary complexity with respect to this metric.\\n\\nWe agree that developing more sophisticated quality metrics for synthetic theorems is an important direction for future research. However, this represents a broader challenge in the field of automated theorem proving that extends beyond the scope of our current work. Our focus in this paper is on demonstrating the practical utility of our synthetic approach through measurable improvements in model performance.\\n\\n> **W3.** The generation process requires significant computational resources\\n\\nWhile our experimental setup did require significant computational resources, these can be reduced by modifying the generator\\u2019s configuration, such as using fewer premises or disabling re-elaboration during the verification phase. Additionally, the process of proof search frequently yields diminishing returns: initially, easy-to-synthesize proofs are rapidly produced; later, as the generator is left to find more challenging proofs (e.g., ones where premise selection yields many \\u201cnear-miss\\u201d lemmas or in Mathlib modules with large numbers of proof steps that yield few proofs), the rate of proof production slows considerably. We observed this repeatedly in our experiments. Therefore, we anticipate that one could generate a significant fraction of the proofs we produced without extensive computational resources. 
Moreover, our experimental setup was batched, such that we could not granularly determine whether all vCPUs remained in use for the entire 24 hours; it is possible that certain machines with long-running tasks continued while those that had completed theorem generation sat idle, which would yield an inflated account of our resource utilization.\\n\\n> **Q1.** \\u2026are there any metrics or quality checks in place to evaluate the significance and validity of the generated theorems and proofs beyond their correctness by construction?\\n\\nPlease refer to our response to item W2 above.\\n\\n> **Q2.** Which specific theorems in miniF2F were newly proved by the Falcon2-11B model fine-tuned with synthetic data? These examples could help clarify the types of problems that benefit from synthetic training.\\n\\nThank you for this suggestion. We have provided below a few examples of newly proved problems from minif2f. We have also added Appendix G, which includes full newly-proven theorems and their proofs from a model fine-tuned on synthetic data.\\n\\n```lean\\ntheorem mathd_algebra_263\\n (y : \\u211d)\\n (h\\u2080 : 0 \\u2264 19 + 3 * y)\\n (h\\u2081 : Real.sqrt (19 + 3 * y) = 7) :\\n y = 10\\n\\ntheorem amc12a_2002_p6\\n (n : \\u2115)\\n (h\\u2080 : 0 < n) :\\n \\u2203 m, (m > n \\u2227 \\u2203 p, m * p \\u2264 m + p)\\n\\ntheorem mathd_algebra_113\\n (x : \\u211d) :\\n x^2 - 14 * x + 3 \\u2265 7^2 - 14 * 7 + 3\\n\\ntheorem induction_11div10tonmn1ton\\n (n : \\u2115) :\\n 11 \\u2223 (10^n - (-1 : \\u2124)^n)\\n```\"}", "{\"comment\": \"> **Q3.** Could this approach be modified or enhanced to better align with the needs of SOTA theorem-proving systems, potentially addressing the disparity in miniF2F performance compared to SOTA models? Is there any specific reason to use Falcon2-11B compared to other models?\\n\\nWe believe that our techniques are applicable to a wide range of models that require formal theorem statements and proofs in Lean for training. 
There is nothing specific to our synthetic generation approach that requires our specific model architecture.\\n\\nOur intent was not to focus on the architecture of the proof-search model itself, but rather on the relative impact of these synthetic data compared to training with organic data alone. Therefore, for our experiments, we opted to use a proof-search architecture representative of many commonly employed in prior work, even if this does not match the precise architectures of the latest SOTA systems. We use the same proof search algorithm as LeanDojo [1], with the only modification being that we implemented a parallel version of it. Since synthetic data did improve theorem-proving performance in this setting, we believe our synthetic generator would also be beneficial in SOTA theorem-proving systems, since, as noted, no aspect of the synthetic data is tied to our particular model configuration.\\n\\nWe opted to use the Falcon2-11B model mainly because of our familiarity with the model and the fact that it is one of several open-sourced SOTA pre-trained LLMs. However, our methods are not specific to the Falcon model, and these same techniques could be leveraged to fine-tune other LLMs as well.\\n\\n> **Q4.** In Appendix B, you assumed a TacticsFor function in Algorithm A that returns a set of applicable theorems at a proof state. How is this function implemented?\\n\\nThe `TacticsFor` function is straightforward and not computationally intensive, which is why we elected not to give explicit pseudocode; however, we have now added additional details to Appendix B to clarify this function\\u2019s behavior. To address your specific concern, it does not iterate over all possible arguments to the selected library lemmas. 
Instead, in the stream of tactics to apply, we represent `have` tactics simply by the library lemmas that the generator will attempt to apply; the generator only attempts to fill in the arguments to those lemmas when \\\"executing\\\" the tactic (i.e., line 1 of Algorithm 2)\\u2014this is the computationally expensive step in the procedure. \\n\\n[1] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J Prenger, and Animashree Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 21573\\u201321612. Curran Associates, Inc., 2023.\"}", "{\"comment\": \"We thank the reviewer for their thorough feedback and suggestions. We respond below to the concerns and questions raised:\\n\\n> **W1.** The usefulness of these synthesized theorems is a significant concern.\\n\\nWe acknowledge that the absolute increase in the number of theorems proved is limited in our particular experimental configuration. However, our work more broadly presents an architecture for forward reasoning-based tactic-proof synthesis in Lean using a novel search strategy (\\u00a7\\u00a74.2\\u20133), lightweight tactic-execution/simulation scheme (\\u00a74.1), LLM-based lemma selection (\\u00a74.4), and verification pipeline that accounts for Lean\\u2019s significant notational extensibility (\\u00a74.5). We believe that these broader architectural contributions to theorem synthesis, in conjunction with our experimental findings that show a modest but nonetheless consistent improvement in performance, still provide substantive benefits for synthetic data generation.
Indeed, there are several factors unrelated or nonessential to our general architecture that likely impacted our experimental results:\\n\\n* As we elaborate upon in our response to item W2 below, the number of forward-reasoning tactics in Lean/Mathlib is currently limited, which does prevent us from generating certain types of proofs (e.g., structured induction proofs) that might provide useful training examples. That said, as we note in our response to W2, our existing tactics do still offer diverse reasoning strategies, and the potential extensions of our tactic inventory that we discuss in that response could increase the utility of the synthesized theorems\\u2014using the same architecture we have presented here.\\n\\n* As we note in \\u00a7\\u00a74.4 and 6, improving the model used for premise selection could lead to more relevant theorem selection, producing proofs that demonstrate the effective use of both library lemmas and built-in tactics. While we rely on ReProver because it is trained on a related task and works easily out-of-the-box, it is not\\u2014as we note in \\u00a74.4\\u2014specifically trained for this task and, as a result, may produce less relevant lemma suggestions than a purpose-built LLM. Even so, the positive results we obtained with the ReProver model suggest the efficacy of our general approach, even without the additional benefit of a more powerful premise-selection model.\\n\\n* For our experiments, we opted to use a proof-search architecture representative of techniques commonly employed in prior work rather than the latest SOTA approaches, since our focus was on synthetic theorem generation rather than proof search. (Our approach was a modified version of that used by LeanDojo ReProver [10].) Accordingly, its performance was constrained relative to SOTA works.\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and suggestions. 
We respond below to the concerns and questions raised:\\n\\n> **W1.** Lack of Technical Novelty in Synthetic Theorem Generation\\n\\nWe respectfully disagree with the conclusion that our work lacks novelty. We outline the novel aspects of our contribution below. Firstly, we are the first to realize such a system of forward-reasoning-based theorem synthesis in Lean's tactic mode, as also noted by reviewer tn5n. Since the majority of human-authored Lean proofs are written in tactic mode rather than as raw terms, this results in training data that is significantly more relevant to real-world proof tasks. Secondly, our approach is not restricted to a single mathematical domain, but is broadly applicable across the fields of math formalized in Mathlib, which represents a wide swath of modern mathematics. Lastly, we enhance our theorem synthesis procedure with LLM-based premise selection, which allows our tool to select relevant library lemmas across this broad array of mathematical disciplines, improving both efficiency and the relevance of the output theorems.\\n\\n> **W2.** Potential Limited Diversity and Difficulty of Generated Theorems\\n\\nWhile our current generator configuration uses five or seven tactics (depending on whether proof minimization is enabled), it is more extensible and capable of generating more diverse theorems than this may suggest. First, tactics can take arguments that significantly broaden the types of reasoning they can apply. The `have` tactic alone can apply more than 150,000 lemmas in the Mathlib library, which span much of modern mathematics. Secondly, our generator is extensible with additional forward-reasoning tactics\\u2014our architecture does not rely on the specific behavior of the five we selected. 
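To make this concrete, the following is a minimal, hand-written Lean sketch (illustrative only, not output of our generator) of the kind of forward-reasoning step that `have` enables; the particular lemmas (`Nat.le_of_lt`, `Nat.le_succ`, `Nat.le_trans`) are just simple core-library examples standing in for the much larger Mathlib inventory:

```lean
-- Illustrative sketch only (not generator output): each `have` step
-- derives a new hypothesis by applying a library lemma to facts in scope.
example (a b : Nat) (h : a < b) : a ≤ b + 1 := by
  have h₁ : a ≤ b := Nat.le_of_lt h      -- forward step 1
  have h₂ : b ≤ b + 1 := Nat.le_succ b   -- forward step 2
  exact Nat.le_trans h₁ h₂               -- combine the derived hypotheses
```

Any library lemma whose antecedents match the hypotheses currently in scope could play the role of `Nat.le_of_lt` or `Nat.le_succ` here, which is what makes `have` so broadly applicable.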
The number of forward-reasoning tactics available in Mathlib is currently limited, and we sought to choose a robust subset whose functionality did not overlap, but more forward-reasoning tactics can be easily added to the generator in cases where a premium is placed on the breadth of tactics used.\\n\\nWe also believe that our synthetic theorems do exhibit diversity in several respects. Firstly, we promote diversity in the proofs we generate using the algorithm shown in Figure 3, which ensures that proofs diverge from one another as early as possible. Secondly, we generate theorems using initial proof states from across Mathlib, leading to synthetic theorems that address a wide array of subject areas in mathematics. We have added a table in Appendix H that lists the number of theorems in our synthetic dataset generated using initial states from each top-level Mathlib submodule (e.g., `Algebra`, `Analysis`, `Topology`), illustrating this subject-matter diversity. The theorems can also exhibit user-specified degrees of complexity, since the number of reasoning steps they employ is configurable in the generator and can be set to arbitrarily large values. With regard to your final concern in this item, we have also added examples of synthetically generated theorems in Appendix F.\"}", "{\"comment\": \"> **Q4.** In lines 115\\u2013116, I believe the references \\\"(An et al., 2024; Polu & Sutskever, 2020; Zombori et al., 2021)\\\" are misplaced. These do not pertain to methods used for generating theorem statements.\\n\\nWe appreciate this detailed feedback. Indeed, the citation of Polu & Sutskever here was misplaced, as we only intended to cite this work in the subsequent paragraph; in their \\u00a74.6, those authors discuss procedures for generating random theorem statements in the domains of $n$-digit arithmetic and ring algebra using a forward-reasoning-based approach. We believe that the citations of An et al. and Zombori et al. 
in their current location are relevant, as each paper discusses (though not as its primary contribution) a method of generating known-true theorem statements within a fixed domain:\\n\\n* An et al., 2024: \\u00a73 describes a procedure for generating theorem statements in intuitionistic propositional logic.\\n* Zombori et al., 2021: \\u00a74 alludes to a procedure by which arithmetic problems (i.e., theorem statements) are randomly generated; the generator itself can be found in the [project repository](https://github.com/atpcurr/atpcurr).\\n\\n[1] Chenyang An, Zhibo Chen, Qihao Ye, Emily First, Letian Peng, Jiayun Zhang, Zihan Wang, Sorin Lerner, and Jingbo Shang. Learn from failure: Fine-tuning llms with trial-and-error data for intuitionistic propositional logic proving. arXiv preprint arXiv:2404.07382, 2024.\\n[2] Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, and Xiaodan Liang. Deepseek-prover: Advancing theorem proving in llms through large-scale synthetic data. arXiv preprint arXiv:2405.14333, 2024.\\n[3] Huaiyuan Ying, Zijian Wu, Yihan Geng, Jiayu Wang, Dahua Lin, and Kai Chen. Lean workbook: A large-scale lean problem set formalized from natural language math problems, 2024.\\n[4] Zsolt Zombori, Adri\\u00e1n Csisz\\u00e1rik, Henryk Michalewski, Cezary Kaliszyk, and Josef Urban. Towards finding longer proofs. In Anupam Das and Sara Negri (eds.), Automated Reasoning with Analytic Tableaux and Related Methods, pp. 167\\u2013186, Cham, 2021.\\n[5] Vlad Firoiu, Eser Aygun, Ankit Anand, Zafarali Ahmed, Xavier Glorot, Laurent Orseau, Lei Zhang, Doina Precup, and Shibl Mourad. Training a first-order theorem prover from synthetic data. In International Conference on Learning Representations Workshop on Mathematical Reasoning in General Artificial Intelligence, 2021. \\n[6] Guillaume Lample, Timothee Lacroix, Marie anne Lachaux, Aurelien Rodriguez, Amaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. 
HyperTree proof search for neural theorem proving. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.\\n[7] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020.\\n[8] Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476\\u2013482, 2024.\\n[9] Mingzhe Wang and Jia Deng. Learning to prove theorems by learning to generate theorems. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 18146\\u201318157. Curran Associates, Inc., 2020.\\n[10] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J Prenger, and Animashree Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 21573\\u201321612. Curran Associates, Inc., 2023.\"}", "{\"comment\": \"> **Q1.** I would appreciate some more elaboration on why the 'have' tactic needs a lemma from the library to introduce a new hypothesis.\\n\\nThe `have` tactic in Lean requires a lemma from the library to introduce a new hypothesis because the user must show (by supplying such a lemma) that the newly added hypothesis follows from the existing ones. It is possible that this confusion is arising from the use of the word \\u201chypothesis.\\u201d In Lean, it is conventional to refer to the collection of locally-known facts in one's context as \\\"hypotheses,\\\" regardless of whether they were introduced as an antecedent/assumption of the theorem statement (i.e., the usual informal use of the term \\\"hypothesis\\\") or as a deduced consequence of such assumptions. 
The hypotheses introduced by the `have` tactic are of the latter form: `have` is used to add to the context a new \\\"locally-true\\\" fact that follows from the existing hypotheses. It does this by applying some existing lemma whose antecedents match the in-context hypotheses. Therefore, the user must specify which lemma they intend to apply to construct the new hypothesis (as well as which in-context hypotheses satisfy its antecedents). We alluded to this terminological detail in passing in \\u00a72 of our original submission, but we recognize that this nonstandard usage may be confusing for some readers; we appreciate your highlighting this ambiguity and, consequently, have added an additional clarification to that section.\\n\\n> **Q2.** It might be interesting to include a qualitative analysis of the additional theorems proven using the synthetic dataset.\\n\\nThank you for this suggestion; we have added an analysis of the additional proved theorems to Appendix G.\"}", "{\"comment\": \"Thanks to the authors for their responses\\u2014they\\u2019ve addressed some of my concerns. That said, I still feel the effectiveness of synthetic theorems hasn\\u2019t been fully demonstrated:\\n- Even with a massive number of synthetic theorems (1 billion vs. 208 million tokens in Mathlib, as pointed out by Reviewer tn5n), the improvement seems relatively small (e.g., just 3 additional theorems).\\n- While Figure 3 and Appendix H show efforts to promote diversity in synthetic theorem generation\\u2014and most of us agree that diverse synthetic theorems can help LLM-based theorem proving\\u2014the diversity hypothesis for the Lean proving task doesn\\u2019t feel sufficiently tested. 
There aren\\u2019t convincing ablation studies or a clear link between diversity and the prover\\u2019s performance.\\n\\nBecause of this, I\\u2019ve decided to stick with my original score.\\n\\nOn the ``have`` tactic: the authors said it \\u2018introduces a new hypothesis using a lemma from the library.\\u2019 My impression, though, is that it introduces a hypothesis that can then be proved like a normal lemma or theorem, using various tactics and lemmas from the library\\u2014not just a single lemma. Either way, thanks for clarifying!\"}", "{\"comment\": \"Thank you for your response; it partially addresses some of my concerns, so I have raised my score to 5. While I understand that the proposed datasets are intended to complement existing manually constructed datasets, I find that they lead to only very modest improvements in the LLMs' performance. This raises questions about the overall impact and significance of the datasets in improving the theorem-proving capabilities of LLMs.\\n\\nMoreover, although some theorems could be proved only after fine-tuning on the extended dataset, others fail to be proved after fine-tuning. Could you please explain why some theorems fail to be proved only when trained on the extended dataset? Is this due to possibly randomness in the experiments?\\n\\nAdditionally, it seems that the authors rely solely on timeout as the evaluation metric. Could you also report Pass@k metrics or indicate how many search attempts the proposed methods can perform within a 10-minute time limit? This would offer a clearer understanding of the method's efficiency and practical utility.\"}", "{\"summary\": \"This paper introduces a novel approach to synthesizing new theorems in Lean. It utilizes forward reasoning tactics to construct new theorems based on a collection of existing proof states from the Mathlib library or Lean workbook. 
Additionally, it incorporates premise selection and proposes innovative search strategies to ensure the diversity and usefulness of the generated theorems. By using these newly generated theorems as complementary training corpus, the resulting theorem proving agent demonstrates an improvement of approximately 1% (3 problems) over the ablated setting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Given the current status of formal theorem proving, the need for a method capable of synthesizing theorems across a broad domain in Lean is critical. Therefore, the motivation for this work is compelling, and it represents a significant attempt to address the theorem generation problem in a wide context.\", \"This paper explores a novel approach to constructing new theorems in Lean. The use of forward reasoning to construct theorems has previously only been applicable in term mode, and I believe this is the first work to introduce this approach in tactic mode.\", \"The paper introduces several interesting components to ensure that the generated theorems are diverse and useful while eliminating unnecessary steps in the generated proofs.\"], \"weaknesses\": [\"The usefulness of these synthesized theorems is a significant concern. The ablation performance using these generated theorems is quite weak, with only 3 additional theorems proven in miniF2F in both settings (M1 and M2). This is notable given that the tokens for the synthesized proofs amount to approximately 1 billion, compared to 208 million tokens in Mathlib.\", \"(Please correct me if I'm wrong) It appears that the tactics used to develop these forward reasoning steps are limited to a small subset, with only 5 tactics described in lines 296 to 301. Does this imply that the generated theorems will also only employ these 5 tactics? 
Considering the large variety of backward reasoning tactics available in Lean, generating theorems based solely on this limited subset of forward reasoning tactics could result in a biased set of synthetic data. I assume this is why the performance with these synthetic data is not more prominent.\", \"The paper is not very well written. Although the logic is easy to follow, the graphs are not particularly illustrative and lack detail. It would be beneficial to include a concrete, illustrative example when describing the method, as this would make the paper much easier to understand.\"], \"questions\": [\"Is it true that the base tactics used for theorem generation (excluding variants with specific parameters) consist of only five (or a slightly larger number)? Will the final generated theorems be limited to tactics from this set?\", \"Could you provide some concrete examples of the generated theorems? A detailed process showing how one of these theorems is created would be particularly helpful.\", \"In lines 73\\u201376, references for \\\"Some\\\" and \\\"Other techniques\\\" should be added.\", \"In lines 115\\u2013116, I believe the references \\\"(An et al., 2024; Polu & Sutskever, 2020; Zombori et al., 2021)\\\" are misplaced. These do not pertain to methods used for generating theorem statements.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a novel framework for generating synthetic theorems and proofs in Lean. New theorems and proofs are generated through forward proving, and a search procedure is enhanced to encourage diversity of theorems and proofs. Experimental evaluation shows modest improvement with the newly synthetic dataset. At a high level, creating a new and diverse dataset for theorem proving is a valuable and important contribution. 
However, the main concern raised by all reviewers and shared by the AC is whether this general idea of dataset synthesis really helps theorem proving. The improvement of approximately 1% (i.e., 3 problems) is somewhat encouraging, but not convincing enough to show its effectiveness. The authors are encouraged to pursue this line of work, since the general methodology sounds promising, though more careful development and evaluation, especially regarding diversity, should be conducted to make the case convincing.\", \"additional_comments_on_reviewer_discussion\": \"There were active discussions between the authors and reviewers. Relatively minor clarification questions and missing recent related works were carefully addressed by the authors and further acknowledged by reviewers. The authors engaged carefully with the discussions and with new evaluation suggestions regarding the major concern about the effectiveness of the proposed approaches. However, the current work as it stands has not yet convinced the reviewers or the AC.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful feedback and suggestions. We respond below to the concerns and questions raised:\\n\\n> **W1.** \\u2026it seems the authors have not established a robust metric to assess the diversity of the generated theorems or to quantify how this diversity impacts the performance of fine-tuned neural provers.\\n\\nWe appreciate your observation about the importance of our search algorithm to our work\\u2019s contribution. We did not perform a post-hoc assessment of theorem diversity because there are two relevant but somewhat orthogonal notions of diversity, both of which are principally controlled by the configuration of the generator prior to generating synthetic theorems and for which there does not appear to be consensus on clear evaluation metrics. 
However, we have added an additional metric regarding subject-matter diversity, which we explain below.\\n\\nThe first relevant notion of diversity is the diversity of the proofs themselves. The algorithm in Figure 3 promotes this type of diversity by ensuring that proofs diverge from one another as early as possible. Because this is the case, there is little room to further promote diversity in the search algorithm itself, since any other algorithm could only produce proofs with greater overlap among them.\\n\\nThe second relevant notion of diversity is the diversity of the subject matter of the theorem statements themselves\\u2014i.e., the mathematical fields to which they apply. To promote this form of diversity, we ran our generator over a large number of Mathlib modules, representing a wide array of modern mathematics. Since the generator draws its initial proof states (including the initial hypotheses from which the generator reasons forward) from these modules, the theorems it produces will span many mathematical disciplines. To further quantify this diversity of topics, we have added a table in Appendix H displaying the number of theorems in our synthetic dataset generated using initial states from each top-level Mathlib submodule (e.g., `Algebra`, `Analysis`, `Topology`).\\n\\n> **W2.** \\u2026it would be beneficial to include more comparisons between autoformalization and the proposed synthetic approach.\\n\\nWe appreciate the reviewer\\u2019s questions about comparisons between autoformalization and our synthetic approach. We would like to clarify a key point about the relationship between these approaches: Our synthetic method is complementary to and can be applied to any proof dataset, whether human-written or autoformalized. 
Rather than viewing these as competing approaches that need direct comparison, our method serves as an enhancement that can augment any existing proof corpus.\\n\\nIn our experiments, we demonstrated this versatility by successfully generating synthetic corpora from both human-written proofs (the Mathlib dataset discussed in \\u00a75.1 and Tables 1\\u20132) and autoformalized proofs (the Lean Workbook dataset used for the experiments in \\u00a75.2 and Table 3). In both cases, fine-tuning on the synthetically expanded datasets led to improved model performance compared to fine-tuning on the original datasets alone.\\n\\nThe empirical results suggest our method provides value regardless of the proof source, making direct comparisons between autoformalization and synthesis less relevant to evaluating our contribution. The more pertinent comparison is between models fine-tuned on a given dataset with and without our synthetic augmentation\\u2014a comparison our experiments directly address.\"}", "{\"comment\": \"Thank you for your feedback and for taking the time to review our responses. We are glad to hear that our responses addressed many of your questions and concerns, especially those regarding our work's impact in the field. We appreciate your suggestion to further refine our quality and performance analysis and will be sure to incorporate this into the final version of the paper.\\n\\nThank you once again for your reviews and for increasing your rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a synthetic generator to create diverse new theorem and proof data for Lean 4 by forward reasoning and premise selection in existing libraries like Mathlib. 
By applying the generator to Lean Mathlib and Lean Workbook dataset, the authors demonstrate improvements in theorem-proving performance, with a fine-tuned Falcon2-11B model showing increased accuracy from 38.1% to 39.3% on the miniF2F benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is logically structured and comprehensively explains the generator\\u2019s architecture, from premise selection to proof synthesis search and minimization. The methodology for generating synthetic theorems through forward reasoning is well-justified.\\n2. The approach\\u2019s reliance on premise selection using the LeanDojo ReProver model ensures theorems remain relevant and mathematically grounded, leading to meaningful performance gains on the miniF2F benchmark. \\n3. This paper significantly advances synthetic data generation for theorem proving by moving beyond simple mutation to generate genuinely new theorems and mitigate the data scarcity issue in formal theorem proving. The use of forward reasoning allows for the production of diverse, novel theorems, which may expand formal datasets in ways that mutation alone may not achieve.\", \"weaknesses\": \"1. Marginal Improvement in Benchmark Performance: Despite generating millions of synthetic theorems and proofs, the fine-tuning of these theorems leads to only a modest improvement in miniF2F performance for the Falcon2-11B model trained on a mixed dataset (from 38.1% to 39.3%). This improvement is notably lower than state-of-the-art approaches such as DeepSeekProver v1.5 and InternLM Prover v2.5, which achieve over 60% on miniF2F dataset. This raises concerns regarding the quality and utility of the generated theorems for practical theorem proving.\\n2. Lack of Quality Metrics for Synthetic Theorems: The paper does not provide specific metrics or validation methods to assess the quality or relevance of the generated synthetic theorems. 
The absence of such metrics makes it difficult to determine whether the synthetic data aligns well with mathematically significant problems or contains inherent limitations affecting model performance.\\n3. Computational Requirements: The generation process requires significant computational resources (60 * 36 vCPU for 24 hours), even if the search depth is not very high (10). It may potentially limit accessibility for broader use.\", \"questions\": \"1. Given the modest improvement in miniF2F accuracy, are there any metrics or quality checks in place to evaluate the significance and validity of the generated theorems and proofs beyond their correctness by construction?\\n2. Which specific theorems in miniF2F were newly proved by the Falcon2-11B model fine-tuned with synthetic data? These examples could help clarify the types of problems that benefit from synthetic training.\\n3. Could this approach be modified or enhanced to better align with the needs of SOTA theorem-proving systems, potentially addressing the disparity in miniF2F performance compared to SOTA models? Is there any specific reason to use Falcon2-11B compared to other models?\\n4. In Appendix B, you assumed a TacticsFor function in Algorithm A that returns a set of applicable theorems at a proof state. How is this function implemented? Is it time-consuming since it may require to iterate over all potential theorems and variable assignments?\\n[1] Xin, Huajian, et al. \\\"DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search.\\\" arXiv preprint arXiv:2408.08152 (2024).\\n[2] Ying, Huaiyuan, et al. 
\\\"Lean Workbook: A large-scale Lean problem set formalized from natural language math problems.\\\" arXiv preprint arXiv:2406.03847 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a framework for generating synthetic Lean theorems and proofs to supplement the training corpus for neural theorem provers. The main approach involves forward proving, where proof states from existing Lean libraries are transformed using a small set of curated proof tactics and model-selected premises. The effectiveness of this framework has been empirically demonstrated on the miniF2F dataset, with the Falcon2-11B model showing improved performance when fine-tuned on the synthetically augmented Lean Workbook dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Well-motived objective: augmenting existing formal corpus synthetically is surely of immerse interest, given the limited amount of formal corpus available.\", \"The paper is relatively well-written and easy to follow, and with good amount of implementation details in the appendix for reproducibility.\"], \"weaknesses\": [\"There is a lack of experiments validating the effectiveness of using diverse (synthetic) theorems. Based on my experience, synthetic data augmentation typically provides significant benefits, even when theorems are randomly mutated. A key innovation of this paper is the algorithm depicted in Figure 3, which aims to promote diversity. 
However, in the subsequent experiments, it seems the authors have not established a robust metric to assess the diversity of the generated theorems or to quantify how this diversity impacts the performance of fine-tuned neural provers.\", \"As an alternative method for augmenting the formal corpus, it would be beneficial to include more comparisons between autoformalization and the proposed synthetic approach. Does the synthetic approach generate more diverse theorems? With an equal amount of additional corpus from each method, would the synthetic approach yield better performance on the fine-tuned prover? Additionally, what impact might combining these two approaches have on overall performance?\"], \"questions\": [\"I would appreciate some more elaboration on why the 'have' tactic needs a lemma from the library to introduce a new hypothesis\", \"It might be interesting to include a qualitative analysis of the additional theorems proven using the synthetic dataset.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an approach for generating synthetic formal theorems and their corresponding proofs by leveraging existing libraries through forward reasoning. Specifically, it focuses on modifying a given proof state by combining premise selection with a set of predefined tactics (e.g., `rewrite`, `simp_arith`). The method retrieves relevant lemmas and applies these tactics to generate new proof states. Additionally, the paper introduces several optimizations to enhance the diversity and quality of the generated theorems. These include a search algorithm to produce a broader variety of theorems within the same computational budget and the use of tactics like `omega` or `aesop` for proof minimization. By utilizing the Mathlib and Lean Workbook datasets, the approach can synthesize up to a million new theorems, expanding existing datasets. 
Experiments on miniF2F show that fine-tuning LLMs on this extended dataset leads to improved performance compared to fine-tuning on the original dataset alone.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"> The paper is well-written and easy to follow.\\n\\n> The proposed method is intuitive and is able to generate millions of new theorems from existing libraries. This addresses, to some extent, the critical challenge of data scarcity in formal theorem proving.\\n\\n> The experiments show that fine-tuning on the extended dataset could lead to improved performance.\", \"weaknesses\": \"> Lack of Technical Novelty in Synthetic Theorem Generation\\n\\nThe approach of using forward reasoning for generating synthetic theorems and proofs is not particularly novel, as it has been widely explored in previous work. While this paper distinguishes itself by generating theorems from existing libraries rather than basic axioms in Lean, it still follows a similar, well-established pipeline.\\n\\n> Potential Limited Diversity and Difficulty of Generated Theorems\\n\\nThe proposed method utilizes only five tactics applied in a linear fashion for synthetic generation, which may limit the variety of theorems it produces. As a result, the generated theorems tend to lack both diversity and complexity, particularly when compared to those written by humans. Additionally, the absence of examples of the generated theorems and their proofs in the paper makes it difficult to accurately evaluate the quality and difficulty of the generated datasets.\\n\\n> Marginal Experimental Results\\n\\nDespite the synthetic dataset being 10 times larger than the original (2B vs. 208M), the performance improvement is modest (e.g., the number of proved theorems only rises from 91 to 94, or 93 to 96). The paper also lacks an ablation study to evaluate the quality of the synthetic data or explain which theorems are newly proved. 
It\\u2019s unclear whether the additional proofs are due to the five tactics (or `omega`), or just the randomness of the experiments.\", \"nit\": \"Figure 1, which explains basic background information, feels unnecessary. It would be more useful to include examples of the synthetic datasets or newly proved theorems in the miniF2F benchmark after fine-tuning on the extended datasets. Additionally, in Table 1, the label for the number of random premises seems to be fractional, but it\\u2019s listed as a percentage (%) in the row title, which could be misleading.\", \"questions\": \"> Could you conduct experiments or analysis to evaluate the quality of the synthetic datasets? For instance, how does the LLM perform when fine-tuned solely on the synthetic data? Additionally, could you compare the performance of a model fine-tuned on an equivalent-sized synthetic datasets to one trained on the real-world datasets?\\n\\n> Could you also provide examples or newly proved theorems from the miniF2F-test, along with the specific tactics the model used to prove them? Additionally, do the theorems proved by the model fine-tuned on the extended datasets (e.g., 96 theorems) overlap with the theorems proved by its counterpart fine-tuned on the original datasets (e.g., 93 theorems), or are there distinct theorems unique to each?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **W3.** Marginal Experimental Results\\n\\nWhile the performance improvement from our synthetic data was modest, we believe its consistency provides support for the utility of our synthetic theorems. 
We note that there are avenues to diversify and enhance the generated data\\u2014such as expanding the number of forward-reasoning tactics as they become available in Mathlib, or replacing ReProver with a purpose-built premise-selection model (we selected ReProver for its ease of use and ubiquity in this area, though it is not directly optimized for this task)\\u2014that may improve the utility of the generated theorems without modifying the overall architecture of the generator. It is this broader architectural contribution that we believe distinguishes our work; as we noted in our response to item W1 above, there are multiple novel aspects we present in this regard, including our search/synthesis strategy (\\u00a7\\u00a74.2\\u20133), tactic-execution/simulation scheme (\\u00a74.1), LLM-based lemma selection (\\u00a74.4), and verification pipeline (\\u00a74.5). \\n\\nWith respect to your concern regarding ablation studies, we did evaluate the model both with and without synthetic training data, as shown in Table 3. We did not include the performance of the base model without any fine-tuning because its performance was quite poor. We also did not evaluate the model when trained solely on synthetic data because the intent of our dataset is to serve as a complement to existing real-world data, providing additional examples of key reasoning strategies in Lean, but not as an all-encompassing corpus of Lean/Mathlib reasoning tactics. Since our synthetic theorems are targeted in this manner, they are not designed to be the sole source of training data for a model.\\n\\nWe appreciate your comment that your evaluation of our data would be aided by assessing the newly proved theorems. To this end, we have added in Appendix G examples of theorems proved with but not without synthetic training data: as these demonstrate, the newly proved theorems are from multiple distinct domains and use a variety of tactics. 
Moreover, the performance of models fine-tuned with synthetic data consistently exceeded that of models fine-tuned only on Mathlib, supporting the efficacy of the synthetic data.\\n\\n> **W4.** Nit: Figure 1, which explains basic background information, feels unnecessary. It would be more useful to include examples of the synthetic datasets or newly proved theorems\\u2026 Additionally, in Table 1, the label for the number of random premises seems to be fractional, but it\\u2019s listed as a percentage (%) in the row title, which could be misleading.\\n\\nWe appreciate your suggestion regarding the addition of examples of synthetic theorems and newly proved theorems, and we have added these to Appendices F and G. We have added a figure with a concrete example of a synthetically generated theorem, per your suggestion, and have moved the former Figure 1 to an appendix. Thank you also for catching the discrepancy in Table 1; we have corrected this error.\\n\\n> **Q1.** Could you conduct experiments or analysis to evaluate the quality of the synthetic datasets? For instance, how does the LLM perform when fine-tuned solely on the synthetic data? Additionally, could you compare the performance of a model fine-tuned on an equivalent-sized synthetic datasets to one trained on the real-world datasets?\\n\\nThe intent of our dataset is to serve as a complement to existing real-world data, providing additional examples of key reasoning strategies in Lean. However, since our synthetic theorems are targeted in this manner, they are not intended to provide an exhaustive treatment of all possible tactics and reasoning strategies that are possible with Lean/Mathlib, and therefore are not designed to be the sole source of training data for a model.\\n\\n> **Q2.** Could you also provide examples or newly proved theorems from the miniF2F-test, along with the specific tactics the model used to prove them? 
Additionally, do the theorems proved by the model fine-tuned on the extended datasets\\u2026overlap with the theorems proved by its counterpart\\u2026?\\n\\nWe have added in Appendix G several examples of theorems that were proved by a model fine-tuned on the synthetic datasets but not proved by the same model fine-tuned only on Mathlib. To specifically address your second question, there were also theorems proved by the model fine-tuned only on Mathlib data that were not proved by the model fine-tuned on synthetic data\\u2014neither set of proved theorems (with synthetic fine-tuning data or with Mathlib-only fine-tuning data) was a subset of the other.\"}", "{\"comment\": \"We appreciate these thoughtful comments regarding the evaluation of our synthetic datasets. While the improvements in proof search performance may appear modest, we believe they are meaningful given that they were achieved through synthetic data alone, without any additional techniques. The consistent improvements across different evaluations suggest that our synthetic data generation approach provides value as a complementary method to existing datasets.\\n\\nRegarding the cases where theorems become unprovable after fine-tuning on the extended dataset, we have investigated several potential causes. While experimental randomness may play a role, we believe a key factor is the lack of model alignment after fine-tuning. Our synthetic data focuses on improving the model's understanding of Lean and mathematical reasoning fundamentals, but was not specifically aligned for competitive mathematical problem-solving. This suggests an opportunity to combine our synthetic data approach with model alignment to potentially achieve better results.\\n\\nRegarding evaluation metrics, our approach differs slightly from the traditional Pass@k metric since our model generates tactics sequentially rather than the entire proof. 
During the 10-minute timeout period, our models generate an average of 43873 tactics at timeout for fine tuned M1 on synthetic dataset, and an average of 61386 tactics at timeout for fine tuned M2 on synthetic dataset. We believe these metrics offer a more complete picture of our method's practical performance.\"}", "{\"comment\": \"We thank the reviewer for their in-depth feedback and suggestions. We respond below to the concerns and questions raised:\\n\\n> **W1.** \\u2026the fine-tuning of these theorems leads to only a modest improvement in miniF2F performance\\u2026notably lower than state-of-the-art approaches such as DeepSeekProver v1.5 and InternLM Prover v2.5, which achieve over 60% on miniF2F dataset.\\n\\nWhile our particular experiments did not achieve the same performance as SOTA approaches like DeepSeekProver and InternLM, there are several factors that contribute to this disparity; because of this, and because of the more broadly applicable nature of our work, we believe our contributions are still significant despite these differences in absolute performance.\\n\\n* As we elaborate upon in our response to item Q3 below, our focus was not on the architecture of our final proof-search model but rather on the synthetic theorem generator itself. Accordingly, we opted to build and fine-tune a proof-search approach representative of common prior work; our architecture was a parallelized version of that used by LeanDojo. Thus, it was expected that our performance would not match that of SOTA architectures like those used by DeepSeekProver and InternLM; instead, we aimed to show that synthetic data could be successfully employed to improve theorem-proving ability, which we believe the data in \\u00a75.2 evince.\\n\\n* As we note in \\u00a76, the diversity of our generator\\u2019s output could be further enhanced with expanded forward-reasoning capabilities in Mathlib. 
Our generator is capable of supporting a wide range of forward-reasoning strategies; currently, though, there is a relative paucity of strictly forward-reasoning tactics in Mathlib. This is not a restriction of our generator *per se*, but rather reflects the fact that many tactics in Mathlib operate exclusively or partially backward. However, many backward-reasoning simplification and rewriting strategies can be adapted to forward reasoning by performing the same operations at a hypothesis instead of a goal (this is how we were able to employ typically backward-reasoning tactics like `rewrite` and `simp` in a forward-reasoning manner). Therefore, if and as more forward-reasoning functionality is added to Mathlib, our generator could produce more varied theorems, equipping LLMs to reason using a wider range of tactics, while using the same architecture we have already presented.\\n\\n* As we note in \\u00a7\\u00a74.4 and 6, we relied on the LeanDojo ReProver model\\u2014which is limited compared to SOTA\\u2014for premise selection. We selected this model for its ease of use and the fact that it was trained on a related\\u2014but not precisely aligned\\u2014task. A more powerful or purpose-designed model could suggest more relevant theorems, producing proofs that more robustly demonstrate the effective use of both library lemmas and built-in tactics. 
Nonetheless, we believe the positive results we saw even using the ReProver model provides evidence for the utility of our approach irrespective of the additional benefit conferred by the selection of premise-selection model.\\n\\nMoreover, our primary contribution is our novel architecture for generating synthetic training data through forward reasoning in Lean, including a diversity-optimized search strategy (\\u00a7\\u00a74.2\\u20133), lightweight tactic-execution/simulation scheme (\\u00a74.1), LLM-based lemma selection (\\u00a74.4), and verification pipeline that accounts for Lean\\u2019s significant notational extensibility (\\u00a74.5). We believe these contributions, whose efficacy is supported by a modest but consistent performance improvement, still offer substantive benefits for LLM-based theorem proving.\"}", "{\"comment\": \"Thank you for your detailed response and the added section in the appendix, which addressed most of my questions and concerns. While there is a lot of room for further improvements on the qualities/effects of synthetically generated theorems and performance on minif2f, this work still presents a meaningful step toward addressing data scarcity in formal theorem proving. I am confident that with the suggested refinements, this work can significantly impact the field. Thus I raise my rating from 3 to 5.\"}" ] }
EdNSQHaaMR
Selective Task Group Updates for Multi-Task Optimization
[ "Wooseong Jeong", "Kuk-Jin Yoon" ]
Multi-task learning enables the acquisition of task-generic knowledge by training multiple tasks within a unified architecture. However, training all tasks together in a single architecture can lead to performance degradation, known as negative transfer, which is a main concern in multi-task learning. Previous works have addressed this issue by optimizing the multi-task network through gradient manipulation or weighted loss adjustments. However, their optimization strategy focuses on addressing task imbalance in shared parameters, neglecting the learning of task-specific parameters. As a result, they show limitations in mitigating negative transfer, since the learning of shared space and task-specific information influences each other during optimization. To address this, we propose a different approach to enhance multi-task performance by selectively grouping tasks and updating them for each batch during optimization. We introduce an algorithm that adaptively determines how to effectively group tasks and update them during the learning process. To track inter-task relations and optimize multi-task networks simultaneously, we propose proximal inter-task affinity, which can be measured during the optimization process. We provide a theoretical analysis on how dividing tasks into multiple groups and updating them sequentially significantly affects multi-task performance by enhancing the learning of task-specific parameters. Our methods substantially outperform previous multi-task optimization approaches and are scalable to different architectures and various numbers of tasks.
[ "Multi-Task Learning", "Multi-Task Optimization", "Proximal Inter-Task Affinity" ]
Accept (Poster)
https://openreview.net/pdf?id=EdNSQHaaMR
https://openreview.net/forum?id=EdNSQHaaMR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1n1Jmq6Ib", "xTvGEur7Nf", "xKb6KmgTnK", "q4Mo7g7a6s", "n4ZDXjBXHr", "mEa1c62iT5", "ieoXe2uZ1J", "f3jQOhsvRt", "c4Fj9xL5S7", "b6XplzrWH9", "aMGxIWbMYV", "ZyQ4vPq7Wa", "Xw5m3ZozZT", "VOR1mjIJty", "OwVGpqVZhk", "NhgCkL4Ij1", "MgaMDztkr3", "LmidChY3Oc", "LAMooGymBf", "KdhzPtDRmT", "G7OvptF70O", "ARHDwT49tw", "6l6CCM1i9B", "6X8D5Z8iIl", "6TZ5QmJeOZ", "56SxbmWVJF", "4zloDMscPl", "4ZiwHXJDkX", "3mjM2O15F1" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733213921903, 1730727603210, 1730463446310, 1733213276677, 1732433391095, 1730287530942, 1732433332543, 1733190527607, 1733195033833, 1737523667775, 1733226764033, 1732682767362, 1732433301497, 1733121075701, 1732435627604, 1733213674349, 1732837385943, 1734646445783, 1732433502845, 1732623065311, 1733103384006, 1733129104846, 1733126501374, 1732432546273, 1730363918544, 1732607618568, 1732432625340, 1733204259234, 1732433374986 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_KUiJ" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_hDHX" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_HAqj" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_akRc" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_KUiJ" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_HAqj" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Area_Chair_1f2r" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_HAqj" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_akRc" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_hDHX" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ], [ "ICLR.cc/2025/Conference/Submission4885/Reviewer_HAqj" ], [ "ICLR.cc/2025/Conference/Submission4885/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the comment\", \"comment\": \"We sincerely appreciate your continued interest in our work and your patience. Your constructive feedback, particularly regarding the generalization aspect of the work, is invaluable in helping us improve.\"}", "{\"summary\": \"This paper presents a multi-task learning method that adaptively groups tasks based on proximal inter-task affinity and then sequentially updates each group. It provides a theoretical explanation of the benefits of sequentially updating task groups and the role of incorporating task-specific parameters in reducing conflicts. Experimental results demonstrate the method's superior performance across various benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
Solid analysis from a theoretical perspective: The paper provides theoretical insights to explain the effectiveness of the proposed method, including (i) the benefits of sequential updating of groups, and (ii) the role of incorporating task-specific parameters in reducing conflicts.\\n\\n2. The paper is well-written and well-organized.\", \"weaknesses\": \"1. Based on proximal inter-task affinity, what principle do we use for task grouping? Discussion on other principles should be included. For example, in [1], they use the Fisher Information Matrix, grouping the most heterogeneous tasks to mitigate conflicts.\\n\\n2. The motivation for introducing proximal inter-task affinity: After reading Appendix A.1, I still find it difficult to understand the motivation for introducing proximal inter-task affinity.\\n\\n3. Sequential learning on tasks [1], domains [3,4], and mini-batches [2] for alignment has been studied previously. It would be beneficial to compare the different theoretical perspectives between the proposed method and these prior studies.\\n\\n[1] Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models, ICML 2024 \\n[2] On the Origin of Implicit Regularization in Stochastic Gradient Descent, ICLR 2021 \\n[3] Sequential Learning for Domain Generalization, ECCV 2022 \\n[4] Gradient Matching for Domain Generalization, ICLR 2022\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The author proposes a method for addressing multi-objective problems by grouping objectives.\\nIn the process of considering relations between tasks (objectives), the concept of inter-task affinity was introduced, but additional computation was reduced by focusing on the update of task-specific parameters.\\nAdditionally, introducing the concept of affinity to 
group objectives is the author\\u2019s original idea and has been thoroughly analyzed both theoretically and experimentally.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Experimental Analysis\\nIn the field of deep learning, the analysis of batch sequences has not been extensively explored. The author argues that grouping certain objectives in multi-objective problems can be significantly beneficial from a global perspective and has demonstrated this experimentally. In cases where the multi-task learning (MTL) results outperform those of single-task learning (STL), the author\\u2019s method consistently achieves the highest performance, which serves as strong empirical support for the validity of the proposed approach.\\n\\n2. Logical Approach\\nThe author\\u2019s approach to deriving proxy task affinity is reasonable.\\nBy utilizing the loss after task parameter updates, the author effectively reduced additional computations\\nAlso, the proposed method has been theoretically proven to remain useful.\\nAdditionally, the explanation of the benefits of grouping derived from Theorem 2 is clearly written and easy to understand.\", \"weaknesses\": \"In my understanding, some questions remain regarding the actual utility of certain theoretical approaches.\\nThe author addresses the utility of multiple objectives in a local context, but optimization in the field of deep learning is far more complex.\\nIn practice, grouping the same classes together for optimization in classification tasks may be optimal for the currently updated classes locally; however, it is challenging to reach a global optimum.\\nI would like to see additional experimental evaluation on this matter.\\nI will leave detailed suggestions in the questions section.\", \"questions\": \"As I mentioned in the weaknesses section, I do not interpret the author\\u2019s theoretical analysis as indicating that the proposed multiple-objective method can be 
effectively solved on a global scale.\\nHowever, I am not suggesting the necessity of a stringent theoretical foundation.\\nI would like to see evidence that the author\\u2019s method can consistently provide an optimal point.\\nDemonstrating that the proposed method is robust across different batch sizes, numbers of groups, and optimization hyperparameters would effectively support its consistency and reliability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the comment\", \"comment\": \"We sincerely appreciate your generous evaluation of our work and the improved score. Your constructive suggestions and insightful questions are incredibly helpful in guiding us to enhance our work further.\"}", "{\"title\": \"Rebuttal (Part 2)\", \"comment\": \"Although we have tested our algorithm on a large number of tasks to validate the scalability of the proposed methods, it is seamlessly applicable to any case where we have at least 2 tasks. For example, we validate our methods on the NYUD-v2 dataset with only three tasks and compare the performance improvements and clustering frequency during optimization in the tables below. 
It shows meaningful performance improvements even with just three tasks.\\n\\n- Performance on NYUD-v2 with three tasks\\n\\n|Task|Semseg (mIoU $\\\\uparrow$)|Depth (RMSE $\\\\downarrow$)|Normal (mErr $\\\\downarrow$)|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|\\n|GD|38.69|0.635|25.61|-4.47\\n|ours|39.50|0.622|24.46|-1.40\\n\\n\\n- Clustering frequency for NYUD-v2 with three tasks\\n\\n| |Semseg|Depth|Normals\\n|:-|:-:|:-:|:-:|\\n|Semseg|1|-|-\\n|Depth|0.411|1|-\\n|Normals|0.330|0.276|1\"}", "{\"summary\": \"This paper proposes a novel MTL method in which tasks are dynamically grouped according to the conflict of different tasks during the optimization process and the model is optimized based on these grouped tasks. The method introduces only a limited computation cost and the experiment results show strong performance of their method compared to several existing MTL methods\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel optimization method to optimize the multi-task learning process from a new perspective.\\n\\n2. The method demonstrates promising results on the NYUv2, PASCAL-Context, and Taskonomy datasets.\\n\\n3. The paper provides a theoretical explanation of the advantages of sequential optimization of task groups and provides an analysis of convergence.\", \"weaknesses\": \"1. There are some incorrect statements in the article. For example, in Line 092, \\u201cThis perspective is not addressed in traditional multi-task optimization, which typically focuses solely on the learning of shared parameters.\\u201d is wrong, because the learning of task-specific parameters is considered in IMTL [1].\\n\\n2. The conclusion in Line 373-374, \\u201cThis suggests that grouping tasks with proximal inter-task affinity and subsequently updating these groups 
sequentially result in lower multi-task loss compared to jointly backpropagating all tasks.\\u201d does not seem relevant to the theorem above. Can you give a more detailed explanation?\\n\\n3. For the conclusion in Line 525-526, \\u201cWe observe that the affinity decay rate ... within a reasonable range.\\u201d, there is a lack of experimental results on the performance of models with different $\\\\beta$.\\n\\n[1] Liu, L., Li, Y., Kuang, Z., Xue, J.-H., Chen, Y., Yang, W., Liao, Q., & Zhang, W. Towards Impartial Multi-task Learning. ICLR, 2021.\", \"questions\": \"1. Why is the number of groups in Figure 3c not an integer?\\n\\n2. Line 523, \\\"Table 4c\\\" means \\\"Figure 4c\\\"?\\n\\n3. Can you compare the performance of random grouping during optimization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (Part 2)\", \"comment\": \"We present results comparing different task grouping strategies in the tables below. These strategies include randomly grouping tasks with a predefined number ($N(\\\\mathcal{M})$), grouping heterogeneous tasks, and grouping homogeneous tasks (our approach). 
The results clearly show that our method achieves superior performance compared to the other grouping scenarios.\\n\\n- Comparison of Various Grouping Strategies on Taskonomy.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|0.0172|0.0176|0.1252|0.1741|0.1700|0.0920|0.2475|0.7781|0.1743|0.1849|0.1660|-3.10\\n|Random ($N(\\\\mathcal{M})$=2)|0.0177|0.0180|0.1259|0.1741|0.1707|0.0923|0.2662|0.7807|0.1757|0.1871|0.1617|-4.24\\n|Random ($N(\\\\mathcal{M})$=3)|0.0172|0.0177|0.1250|0.1741|0.1703|0.0920|0.2619|0.7754|0.1749|0.1866|0.1607|-3.35\\n|Random ($N(\\\\mathcal{M})$=4)|0.0183|0.0187|0.1277|0.1746|0.1706|0.0936|0.2812|0.7841|0.1804|0.1882|0.1636|-6.12\\n|Random ($N(\\\\mathcal{M})$=5)|0.0186|0.0184|0.1274|0.1747|0.1708|0.0935|0.3150|0.7842|0.1800|0.1888|0.1640|-7.17\\n|Random ($N(\\\\mathcal{M})$=6)|0.0208|0.0209|0.1349|0.1750|0.1721|0.0961|0.3334|0.8222|0.1976|0.1935|0.1703|-13.20\\n|Ours|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thanks for your feedback. After reading reviews from other reviewers and the authors' responses, I decided to maintain my score.\"}", "{\"comment\": \"Thanks for your response. I have raised my score to 6.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Comments for reviewer\", \"comment\": \"We appreciate your constructive feedback and thoughtful opinions on our work, as your suggestions are invaluable in helping us make further improvements. To address your comments, we conducted additional experiments using a relatively larger batch size, although we are unsure if we have fully addressed your concerns. 
Given the time limit, we hope our additional comments have addressed your concerns effectively, and we sincerely thank you for the effort and care you have devoted to reviewing our work.\"}", "{\"title\": \"Response to the Comments\", \"comment\": \"We deeply appreciate your response and once again thank you for your efforts in reviewing our work. Your insightful questions are highly valued, as the grouping strategy and its analysis are crucial aspects of this study. Below are our detailed responses, and we welcome any further questions or discussions.\\n\\n---\\n\\n### **1. I would like to know how you define \\\"heterogeneous\\\" here. Does it refer to two tasks with significant conflicts?**\\n\\nDefining \\\"heterogeneous\\\" tasks is indeed critical for explaining the aforementioned experiments, and we appreciate your observation. As you pointed out, we should clarify how the \\\"heterogeneous\\\" setting is implemented. \\n\\nThe similarity between tasks depends on the nature of their objectives, but it can be measured using a metric to further enhance learning algorithms. In our work, we use *proximal inter-task affinity* as this metric. By tracking proximal inter-task affinity during optimization, we can represent the similarity or disparity between tasks in a matrix format. Taking the negative value of the tracked affinity between two tasks represents their disparity instead of similarity. \\n\\nUsing this measure, we cluster tasks based on disparity to create \\\"heterogeneous\\\" task groups. Thus, the \\\"heterogeneous\\\" setting refers to the proposed *selective task group updates algorithm* that clusters tasks with negative proximal inter-task affinity.\\n\\n---\\n\\n### **2. 
From my understanding, heterogeneous should serve as a lower bound for MTL, but the results are the same when randomly divided into two groups.**\\n\\n\\nWe mistakenly reported the same results for the \\\"heterogeneous\\\" setting and \\\"Random ($N(\\\\mathcal{M}) = 2$).\\\" We sincerely apologize for any confusion caused by this error. The corrected results have now been updated in the respective table.\\n\\nHowever, regarding your observation that \\\"heterogeneous should serve as a lower bound,\\\" our experimental results reveal a different trend. In this setting, the \\\"heterogeneous\\\" strategy demonstrates better performance compared to random selection criteria. A more detailed explanation of this phenomenon is provided in our response to Question 3.\\n\\n---\\n\\n### **3. It seems that the more groups there are, the worse the performance becomes in the experimental results. Could you analyze and explain this phenomenon? I am very interested in the reasons behind it.**\\n\\nThank you for this insightful question, as it highlights a non-trivial aspect of our method. Before addressing this, let us clarify that these experiments are conducted in *dynamic settings*, where task grouping changes across batches during optimization.\\n\\nIn the \\\"heterogeneous\\\" setting, we track proximal inter-task affinity dynamically during optimization and group heterogeneous tasks for each batch. (The affinity fluctuates continuously during training.) In contrast, in the random grouping strategy, tasks are grouped randomly for each batch.\\n\\nTo explain the observed phenomenon, we consider two key factors: \\n1. **Randomness in the clustering process** \\n2. **Cluster characteristics (e.g., number of clusters and task grouping criteria)** \\n\\nFirst, when comparing random grouping scenarios to our method, it becomes clear that random grouping during optimization negatively affects multi-task performance. 
This is due to the instability introduced by the randomness in forming task groups for each batch. As the number of task groups ($N(\\\\mathcal{M})$) increases, the randomness intensifies because there are more possible combinations of groups. However, when the number of groups equals the number of tasks (the \\\"Separate\\\" case in Table 5 of the main paper, showing -2.48 (% $\\\\uparrow$) in $\\\\triangle_m$), the randomness is eliminated because each task is assigned to its own group, leaving no room for variability in the grouping process. As a result, the performance improves compared to randomly clustered task sets. Similarly, in the \\\"heterogeneous\\\" setting, although the clustering targets heterogeneous tasks, the randomness in grouping is reduced because tasks are grouped based on their tracked affinity during optimization. This explains why the \\\"heterogeneous\\\" setting outperforms random grouping strategies.\\n\\nSecond, we observe that grouping homogeneous tasks (tasks with high proximal inter-task affinity), as adopted in our method, yields better performance than grouping heterogeneous tasks (tasks with low proximal inter-task affinity). This indicates that grouping homogeneous tasks, even though their affinities fluctuate during optimization, consistently outperforms alternative grouping strategies.\\n\\n---\\n\\nWe sincerely welcome further questions or discussions to address any remaining uncertainties or areas of interest.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. The feedback provided was invaluable in refining our work. In the revised version, we address your concerns both from a theoretical perspective and through additional supporting experiments. We welcome any further feedback or discussion.\\n### **Answer to W1**\\n### The proposed theoretical analysis can be extended to a global context.\\nThank you for your valuable feedback. 
We appreciate your thoughtful concerns about the utility of the theoretical approaches and are grateful for your gracious evaluation of our theoretical analysis. Before addressing the experimental results suggested by the reviewer, we would like to defend some aspects of our approach.\\n\\nOur theoretical analysis demonstrates the utility of selective grouping updates in multi-task optimization at the batch level for simplicity of explanation. However, this approach can be extended to a global scale, as the primary advantage of task grouping updates lies in enhancing the learning of task-specific parameters. Selective updates improve optimization by enabling better learning of task-specific parameters at the batch level, as shown in the proof process of Theorem 5 (via the inequality $B_{i,j,k \\\\rightarrow k}^{t+(m-1)/M} \\\\leq B_{j; i,k \\\\rightarrow k}^{t+(m-1)/M}$, which is driven by updates to task-specific parameters). Since task-specific parameters exclusively influence the outputs of their respective tasks, the reasoning behind our batch-level analysis (Theorem 5) naturally extends to the global scale for all tasks, from $t=1$ to $T$ and $m=1$ to $M$.\\n\\nHowever, as the reviewer pointed out, this theoretical analysis alone may not be sufficient to fully establish the utility of the proposed methods. Therefore, we focus on demonstrating the robustness of the algorithm under varying batch sizes and affinity decay rates ($\\\\beta$), which is an important hyperparameter in our optimization. In our approach, we do not explicitly determine the number of groups, as it is dynamically determined by the proximal inter-task affinity we track. 
So, we demonstrate the superiority of our method compared to other grouping strategies across different grouping scenarios with varying numbers of groups.\\n\\n#### **Additional Experiments**\\nWe compare our method with single-gradient descent (GD) to evaluate how robustly it improves multi-task performance across varying batch sizes. The proposed optimization method consistently demonstrates performance improvements ($\\\\triangle_m$ (% $\\\\uparrow$)) of 5.27%, 5.71%, and 6.13% across different batch sizes, outperforming previous optimization methods in terms of performance gains.\\n\\n\\n- Results on Taskonomy with varying batch sizes using ViT-B (batch sizes in brackets).\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Single Task|0.0183|0.0186|0.1089|0.1713|0.1630|0.0863|0.2953|0.7522|0.1504|0.1738|0.1530|-\\n|GD(4)|0.0208|0.0214|0.1323|0.1747|0.1723|0.0952|0.2768|0.8214|0.1936|0.1921|0.1677|-10.88\\n|Ours(4)|0.0185|0.0190|0.1273|0.1741|0.1709|0.0928|0.2739|0.7957|0.1809|0.1888|0.1632|-6.19\\n|GD(8)|0.0188|0.0197|0.1283|0.1745|0.1718|0.0933|0.2599|0.7911|0.1799|0.1885|0.1631|-6.35\\n|Ours(8)|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\\n|GD(16)|0.0172|0.0180|0.1248|0.1742|0.1711|0.0920|0.2280|0.7641|0.1706|0.1848|0.1589|-1.94\\n|Ours(16)|0.0153|0.0154|0.1186|0.1737|0.1682|0.0893|0.1967|0.7334|0.1581|0.1780|0.1516|+4.19\\n\\nWe examine the influence of the affinity decay rate $\\\\beta$, an important hyperparameter in the proposed optimization method. 
The results demonstrate that the proposed optimization method consistently improves performance across varying $\\\\beta$ values, reducing the need for extensive hyperparameter tuning in real-world applications.\\n\\n- Results on Taskonomy with different affinity decay rates $\\\\beta$.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Single Task|0.0183|0.0186|0.1089|0.1713|0.1630|0.0863|0.2953|0.7522|0.1504|0.1738|0.1530|-\\n|GD|0.0188|0.0197|0.1283|0.1745|0.1718|0.0933|0.2599|0.7911|0.1799|0.1885|0.1631|-6.35\\n|$\\\\beta$=0.0001|0.0165|0.0168|0.1224|0.1739|0.1693|0.0907|0.2304|0.7581|0.1683|0.1831|0.1571|-0.18\\n|$\\\\beta$=0.001|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\\n|$\\\\beta$=0.01|0.0167|0.0171|0.1232|0.1739|0.1698|0.0912|0.2362|0.7623|0.1705|0.1834|0.1576|-1.01\\n|$\\\\beta$=0.1|0.0167|0.0171|0.1231|0.1739|0.1695|0.0912|0.2355|0.7631|0.1697|0.1831|0.1575|-0.87\"}", "{\"comment\": \"Thank you for your response. Although you identified two key factors for the observed phenomenon, my concern remains unresolved. As I understand it, the motivation behind your method is that conflicts between different tasks during optimization lead to poor results. However, based on your findings, even when these conflicts are maximized, the performance is still better than random, suggesting that task conflicts might not be the core issue affecting multi-task optimization. This conclusion seems to weaken your motivation.\"}", "{\"title\": \"Reply to All Reviewers\", \"comment\": \"We greatly appreciate all the reviewers for their thorough review and constructive suggestions. Their feedback has been invaluable in guiding us to refine our work. We have addressed all the reviewers' concerns in the final version, and the key changes are summarized below:\\n\\n1. 
**Interpretation of Our Work in the Generalization Perspective (KUiJ, akRc):** \\n We have incorporated a generalization perspective in our work, comparing it with previous approaches. Specifically, we discuss the differences between multi-task optimization and domain generalization, particularly regarding which grouping strategy is beneficial for each respective goal. We include comparisons with previous multi-task domain generalization approaches in the theoretical perspective and present additional experiments showing which grouping strategy is advantageous for multi-task optimization in the revised version.\\n\\n\\n2. **Robustness of Optimization on a Global Scale and Under Varying Conditions (hDHX, akRc, HAqj):** \\n We extend our theoretical analysis to the global level by further developing batch-level analysis. We also prove the robustness of our optimization across different conditions, including variations in batch size, decay rate ($\\\\beta$), and grouping strategy. Furthermore, we include a three-task setting to demonstrate that our algorithm can boost multi-task performance even with fewer tasks.\\n\\n3. **Grouping Strategy and Its Effects on Multi-task Performance (KUiJ, akRc, HAqj):** \\n We present an ablation study on how different grouping strategies impact multi-task performance and generalization. We compare our method, which groups homogeneous tasks for optimization, with random grouping, predefined grouping, and heterogeneous grouping scenarios. Our results confirm that the proposed grouping strategy provides the best performance. These experiments have been incorporated into the revised version.\\n\\n4. **Addressing Minor Errors (HAqj):** \\n We have corrected typos and reworded statements that could lead to misunderstandings, improving the readability and clarity of the paper.\"}", "{\"title\": \"Response to the comment\", \"comment\": \"We sincerely appreciate your generous evaluation of our work and the improved score. 
We deeply regret any doubts caused by our insufficient explanation, particularly regarding the random scenario. Nonetheless, we greatly value your constructive suggestions and acknowledgment of our contributions.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We sincerely appreciate your response and your continued interest in our work. We address your remaining concerns below, supplemented with additional experiments.\\n\\n---\\n\\n### **One remaining question concerns the batch size used in your method, which appears to be relatively small. Do you have any comparative experimental results for larger batch sizes?**\\n\\nAs you pointed out, most multi-task optimization research, including our work, typically employs smaller batch sizes compared to single-task domains. This is mainly due to the significant memory and computational costs associated with optimizing multi-task networks as the number of tasks increases.\\n\\nIn our experiments, we used batch sizes similar to those in previous studies within this field. However, your concern about evaluating the proposed approach with larger batch sizes is both reasonable and valuable for further assessing our work. Therefore, we have conducted additional experiments to evaluate multi-task performance with larger batch sizes, as shown in the tables below. With increasing batch sizes, our method consistently demonstrates better performance compared to baselines. Moreover, larger batch sizes further enhance multi-task performance as they contribute to more stable tracking of proximal inter-task affinity during optimization (The performance gap increases even further by 7.45% with a batch size of 128). As highlighted in Table 1 of the main paper, our optimization method has a complexity of $\\\\mathcal{O}(1)$, in contrast to other optimization methods that require $\\\\mathcal{O}(\\\\mathcal{K})$ memory. This makes other methods prohibitive for large batch sizes. 
\\n\\n\\n- Results on Taskonomy with varying batch sizes using ViT-B (batch sizes in brackets).\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|GD(64)|0.0164|0.0164|0.1183|0.1738|0.1691|0.0893|0.1749|0.7235|0.1552|0.1759|0.1486|+4.34\\n|Ours(64)|0.0151|0.0139|0.1105|0.1732|0.1648|0.0880|0.1598|0.6946|0.1450|0.1706|0.1397|+9.57\\n|GD(128)|0.0163|0.0157|0.1151|0.1737|0.1681|0.0879|0.1576|0.7082|0.1490|0.1715|0.1437|+6.85\\n|Ours(128)|0.0124|0.0125|0.1071|0.1732|0.1641|0.0845|0.1415|0.6799|0.1361|0.1623|0.1346|+14.30\\n\\n\\nWe sincerely welcome any additional questions or discussions you may have.\"}", "{\"metareview\": \"The paper addresses the task of multi-task learning, where negative transfer is a well-known issue. The paper proposes a method to group tasks and then update each group. The process of grouping is based on the notion of proximal task affinity. Most reviewers agreed that the paper was well written and investigates an important problem statement. The questions raised by the reviewers mainly fell into two themes: generalization and clarification of aspects of the theorem. The authors' rebuttal addressed these issues and all reviewers are unanimous in their final recommendation. Hence, this paper should be accepted to ICLR.\", \"additional_comments_on_reviewer_discussion\": \"After the rebuttal, both Reviewer KUiJ and Reviewer HAqj raised their scores. There was a robust discussion period with interaction from all the reviewers and the authors.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. Your thorough feedback was very helpful in refining our work. Specifically, we noticed some typos and statements that could cause misunderstandings for readers. We have revised these thoroughly, incorporating your feedback. 
We welcome any further feedback or discussion.\\n\\n### **Answer to W1**\\nThank you for pointing out the error in our statement. IMTL handles shared and task-specific parameters separately, so our original statement in line 92 was incorrect. What we intended to convey is that, in our approach, the updates to shared and task-specific parameters occur concurrently, with each influencing the other. Unlike previous optimization methods that treat these parameters independently, our approach integrates their interdependence through proximal inter-task affinity for improved multi-task optimization. We have revised the statement to: \\\"This perspective is not addressed in traditional multi-task optimization, which typically deals with the learning of shared and task-specific parameters independently,\\\" in the revised version.\\n\\n### **Answer to W2**\\nThe statements explain the main result of Theorem 5, which is the inequality $B_{i,j,k \\\\rightarrow k}^{t+(m-1)/M} \\\\leq B_{j; i,k \\\\rightarrow k}^{t+(m-1)/M}$. Specifically, $B_{i,j,k \\\\rightarrow k}^{t+(m-1)/M}$ represents the proximal inter-task affinity when updating all tasks $\\\\{i,j,k\\\\}$ together, while $B_{j; i,k \\\\rightarrow k}^{t+(m-1)/M}$ represents the affinity when updating tasks $\\\\{j\\\\}$ and $\\\\{i,k\\\\}$ sequentially, with $\\\\{i,k\\\\}$ having positive inter-task affinity. Considering the definition of proximal inter-task affinity, a higher proximal inter-task affinity with respect to task $k$ means lower loss for task $k$. Therefore, when this reasoning is extended across all tasks, grouping tasks with positive proximal inter-task affinity and updating them sequentially leads to lower multi-task loss.\\n\\n\\n### **Answer to W3**\\nWe examine the impact of the affinity decay rate $\\\\beta$ in the table below. 
The results demonstrate that our approach consistently improves performance across a wide range of $\\\\beta$ values, reducing the need for extensive hyperparameter tuning in practical applications. We have revised the statement in the updated version and included the experimental results in the supplementary material.\\n\\n- Results on Taskonomy with different affinity decay rates $\\\\beta$.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Single Task|0.0183|0.0186|0.1089|0.1713|0.1630|0.0863|0.2953|0.7522|0.1504|0.1738|0.1530|-\\n|GD|0.0188|0.0197|0.1283|0.1745|0.1718|0.0933|0.2599|0.7911|0.1799|0.1885|0.1631|-6.35\\n|$\\\\beta$=0.0001|0.0165|0.0168|0.1224|0.1739|0.1693|0.0907|0.2304|0.7581|0.1683|0.1831|0.1571|-0.18\\n|$\\\\beta$=0.001|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\\n|$\\\\beta$=0.01|0.0167|0.0171|0.1232|0.1739|0.1698|0.0912|0.2362|0.7623|0.1705|0.1834|0.1576|-1.01\\n|$\\\\beta$=0.1|0.0167|0.0171|0.1231|0.1739|0.1695|0.0912|0.2355|0.7631|0.1697|0.1831|0.1575|-0.87\\n\\n\\n### **Answer to Q1**\\nThe proposed optimization method divides the task set into multiple groups for each batch. Over the course of 40,000 training iterations, we partition the iterations into approximately 50 intervals and calculate the average number of task groups within each interval for clarity in visualization. This averaging process explains why the number of groups shown in Figure 3c is not an integer.\\n\\n### **Answer to Q2**\\nThank you for pointing out the error in our paper. We have revised \\\"Table 4c\\\" in Line 523 to \\\"Figure 4c.\\\"\\n\\n### **Answer to Q3**\\nWe present an ablation study on various task grouping strategies and their impact on performance. 
Specifically, these strategies include randomly grouping tasks with a predefined number ($N(\\\\mathcal{M})$), grouping heterogeneous tasks, and grouping homogeneous tasks (our approach). The results clearly show that our optimization strategy achieves better performance.\\n\\n- Comparison of Various Grouping Strategies on Taskonomy.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|0.0172|0.0176|0.1252|0.1741|0.1700|0.0920|0.2475|0.7781|0.1743|0.1849|0.1660|-3.10\\n|Random ($N(\\\\mathcal{M})$=2)|0.0177|0.0180|0.1259|0.1741|0.1707|0.0923|0.2662|0.7807|0.1757|0.1871|0.1617|-4.24\\n|Random ($N(\\\\mathcal{M})$=3)|0.0172|0.0177|0.1250|0.1741|0.1703|0.0920|0.2619|0.7754|0.1749|0.1866|0.1607|-3.35\\n|Random ($N(\\\\mathcal{M})$=4)|0.0183|0.0187|0.1277|0.1746|0.1706|0.0936|0.2812|0.7841|0.1804|0.1882|0.1636|-6.12\\n|Random ($N(\\\\mathcal{M})$=5)|0.0186|0.0184|0.1274|0.1747|0.1708|0.0935|0.3150|0.7842|0.1800|0.1888|0.1640|-7.17\\n|Random ($N(\\\\mathcal{M})$=6)|0.0208|0.0209|0.1349|0.1750|0.1721|0.0961|0.3334|0.8222|0.1976|0.1935|0.1703|-13.20\\n|Ours|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\"}", "{\"comment\": \"Thank you for your response, which addressed most of my questions. However, I still have some concerns regarding your reply to Q3.\\n\\n1. I would like to know how you define \\\"heterogeneous\\\" here. Does it refer to two tasks with significant conflicts?\\n\\n2. From my understanding, heterogeneous should serve as a lower bound for MTL, but the results are the same when randomly divided into two groups.\\n\\n3. It seems that the more groups there are, the worse the performance becomes in the experimental results. Could you analyze and explain this phenomenon? I am very interested in the reasons behind it.\\n\\nThank you again for your response. 
If you can address these concerns, I would be happy to improve my score.\"}", "{\"title\": \"Rebuttal (Part 3)\", \"comment\": \"The core novelty of our work lies in incorporating inter-task affinity into multi-task optimization and demonstrating its role in facilitating selective group updates through the proposed grouping strategy to enhance multi-task performance. The concept of proximal inter-task affinity serves as a technical tool, enabling the concurrent tracking of task affinities and seamless multi-task optimization.\\n\\nWe greatly value your insights and look forward to engaging in further discussions. Thank you for your thoughtful feedback!\"}", "{\"title\": \"Additional Response to the Comments\", \"comment\": \"We kindly ask you to consider the distinction between static and dynamic settings. In a static setting, where predefined task clusters are used without change during optimization, the randomness caused by the formation of task sets during optimization is eliminated. As a result, grouping heterogeneous tasks serves as a lower bound compared to a random scenario. However, in a dynamic setting, heterogeneous task grouping does not serve as a lower bound because additional randomness exists as the formation of task clusters fluctuates during optimization.\\nWe validate our method in both settings and demonstrate significant performance improvements in each. This highlights that the grouping strategy, which depends on task conflicts, is one of the key factors for optimizing multi-task learning.\\n\\nWe sincerely appreciate your patience and, if permitted, look forward to any further questions or discussions.\"}", "{\"title\": \"Response to the Comments\", \"comment\": \"Thank you for patiently reviewing and paying attention to our work. Task conflicts remain a core issue affecting multi-task optimization. 
To further clarify our assertions, we present additional experimental results for NYUD-v2 and PASCAL-Context in the tables below.\\n\\nIn these experiments, we use a **static** setting where task groups are predefined before optimization begins, and the optimization proceeds using these fixed task groups. \\nIn this setup, the clustering process's randomness during optimization is minimized because the task groups remain fixed throughout the optimization. Thus, we can isolate and evaluate the influence of task grouping (specifically, the cluster characteristics, which is the second factor we highlighted) on the optimization methods.\\n\\nWe refer to this random grouping scenario under the static setting as **\\\"Static Random\\\"**, where the number of task groups is predefined as $N(\\\\mathcal{M})$, and the tasks are randomly selected without any changes during optimization. When the randomness in the clustering process across batches is eliminated, the heterogeneous setting serves as a lower bound for the random scenario. Our method, which dynamically clusters homogeneous tasks, consistently outperforms other grouping strategies.\\n\\nWe also emphasize that the performance gap resulting from different grouping strategies is significant. (Please refer to Table 3 of the main paper for comparison.) It shows that **task conflicts remain the core issue affecting multi-task optimization.**\\n\\nSince task relations dynamically fluctuate during optimization, both the randomness in task grouping and the criteria for forming task groups are critical factors in evaluating our method. 
We hope that the experiments conducted in the dynamic setting (as addressed in Answer to Q3) and the static setting (as shown in the tables below) address your questions.\\n\\n\\n- Comparison of Various Grouping Strategies on NYUD-v2.\\n\\n|Task|Semseg (mIoU $\\\\uparrow$)|Depth (RMSE $\\\\downarrow$)|Normal (mErr $\\\\downarrow$)|Edge (odsF $\\\\uparrow$)| $\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|38.12|0.620|24.59|52.20|-5.14\\n|Static Random ($N(\\\\mathcal{M})$=2)|39.31|0.630|24.36|52.20|-4.50\\n|Static Random ($N(\\\\mathcal{M})$=3)|38.29|0.634|24.29|52.60|-5.05\\n|Ours|**40.02**|**0.618**|**24.09**|**53.90**|**-2.58**\\n\\n\\n\\n- Comparison of Various Grouping Strategies on PASCAL-Context.\\n\\n|Task|Semseg (mIoU $\\\\uparrow$)|Parsing (mIoU $\\\\uparrow$)|Saliency (maxF $\\\\uparrow$)|Normal (mErr $\\\\downarrow$)|Edge (odsF $\\\\uparrow$)|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|66.45|56.25|82.12|17.50|33.80|-9.89\\n|Static Random ($N(\\\\mathcal{M})$=2)|67.35|56.71|82.43|17.35|34.50|-8.91\\n|Static Random ($N(\\\\mathcal{M})$=3)|66.58|56.23|82.21|17.51|34.80|-9.42\\n|Ours|**68.14**|**57.15**|**82.52**|**17.19**|**39.50**|**-6.24**\\n\\n\\nWe greatly appreciate your patience. If you permit, we look forward to any further questions or discussions.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. The perspectives you provided have been incredibly helpful in refining our work. In the revised version, we address the points raised and highlight the key differences between our work and previous studies. We welcome any further feedback or discussion.\\n### **Answer to W1, W3** \\n### Discussion on Task Grouping Strategy and Theoretical Differences with Domain Generalization Approaches.\\nThank you for your insightful questions about our work. 
Task grouping is a critical aspect of both multi-task optimization and multi-task domain generalization. To enhance our discussion, we have included relevant works suggested by the reviewer in our paper (Additional Related Works). The key distinction between our work and these studies is that they focus on inter-task relations from a domain generalization perspective.\\n\\nIn particular, [1] proposes grouping heterogeneous tasks to regularize them, promoting the learning of more generalized features across domain shifts. [2] explores generalization strategies at the mini-batch level. [3] addresses diverse domain shift scenarios by incorporating all possible sequential domain learning paths to generalize features for unseen domains. [4] focuses on generalization to unseen domains by reducing dependence on specific domains through inter-domain gradient matching.\\n\\nThe objectives of conventional multi-task optimization and domain generalization differ fundamentally. Conventional multi-task optimization typically assumes that the source and target domains share similar data distributions, whereas domain generalization focuses on scenarios with significant domain shifts. This distinction leads to different approaches in utilizing task relations to achieve their respective goals. As demonstrated in Theorems 1 and 2 of our work, in multi-task optimization, simultaneously updating heterogeneous tasks with low task affinity results in suboptimal optimization and increased loss compared to updating similar task sets with high task affinity. This aligns with findings from prior multi-task optimization studies introduced in the related works section. In contrast, domain generalization uses task sets as a tool to extract generalized features that can be applied across various unseen domains. 
Overfitting to similar tasks can degrade performance on unseen domains, making it beneficial to use heterogeneous tasks as a form of regularization.\\n\\nThus, in conventional multi-task optimization settings, it is more effective to group homogeneous tasks together. To support this with experimental evidence, we present results comparing different task grouping strategies in the tables below. These strategies include randomly grouping tasks with a predefined number ($N(\\\\mathcal{M})$), grouping heterogeneous tasks, and grouping homogeneous tasks (our approach). The results clearly demonstrate that grouping homogeneous task sets achieves superior performance under the proposed settings.\\n\\n\\n- Comparison of Various Grouping Strategies on Taskonomy.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|0.0172|0.0176|0.1252|0.1741|0.1700|0.0920|0.2475|0.7781|0.1743|0.1849|0.1660|-3.10\\n|Random ($N(\\\\mathcal{M})$=2)|0.0177|0.0180|0.1259|0.1741|0.1707|0.0923|0.2662|0.7807|0.1757|0.1871|0.1617|-4.24\\n|Random ($N(\\\\mathcal{M})$=3)|0.0172|0.0177|0.1250|0.1741|0.1703|0.0920|0.2619|0.7754|0.1749|0.1866|0.1607|-3.35\\n|Random ($N(\\\\mathcal{M})$=4)|0.0183|0.0187|0.1277|0.1746|0.1706|0.0936|0.2812|0.7841|0.1804|0.1882|0.1636|-6.12\\n|Random ($N(\\\\mathcal{M})$=5)|0.0186|0.0184|0.1274|0.1747|0.1708|0.0935|0.3150|0.7842|0.1800|0.1888|0.1640|-7.17\\n|Random ($N(\\\\mathcal{M})$=6)|0.0208|0.0209|0.1349|0.1750|0.1721|0.0961|0.3334|0.8222|0.1976|0.1935|0.1703|-13.20\\n|Ours|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\\n\\n\\n\\nAdditionally, a notable difference between previous works and ours is that prior studies mainly focus on static settings, where inter-task relations remain fixed during learning. 
In contrast, our approach continuously tracks the fluctuating inter-task relations during optimization and leverages them for further optimization. The concept of proximal inter-task affinity enables this continuous tracking to occur concurrently during the optimization process.\"}", "{\"summary\": \"This paper discusses the challenges of multi-task learning, where training multiple tasks together in one architecture can lead to negative transfer or performance degradation. Traditional solutions focus on optimizing shared parameters but neglect task-specific ones. The proposed solution involves grouping tasks selectively and updating them in each batch, along with an algorithm that adapts to determine effective task grouping. The concept of proximal inter-task affinity is introduced to track task relations during optimization. This approach is said to improve multi-task performance by enhancing the learning of task-specific parameters and is shown to outperform previous methods, being scalable to different architectures and task numbers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper investigates an important problem of multi-task learning.\\n2. This paper is well-written and easy to follow.\\n3. Realizing that traditional solutions focus on optimizing shared parameters but neglect task-specific ones, the authors delve into the concept of proximal inter-task affinity, making this paper well-motivated.\\n4. The proposed method is new to me and gives a fresh perspective to further improve the performance of MTL.\\n5. This approach is said to improve multi-task performance by enhancing the learning of task-specific parameters and is shown to outperform previous methods, being scalable to different architectures and task numbers.\", \"weaknesses\": \"1. The task grouping result in Figure 3c does not seem to have converged. Will the number of groups further increase as the iteration becomes larger?\\n2. 
Why Nash-MTL is not reported in Table 2?\\n3. In the theoretical analysis (Section 4), the authors explain how this sequential update strategy can improve multi-task performance from an optimization standpoint. What about the generalization standpoint? I think the generalization of a model is more important.\\n4. In real-world applications, a typical MTL problem may have only a few tasks (e.g., 3). Will the proposed method work in such a circumstance? What is the task grouping result if there are only three tasks?\", \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed additional explanations. The robustness of performance against hyperparameter changes and comparisons with other grouping methods sufficiently support the significance of the proposed optimization approach. One remaining question concerns the batch size used in your method, which appears to be relatively small. Do you have any comparative experimental results for larger batch sizes?\"}", "{\"title\": \"Rebuttal (Part 2)\", \"comment\": \"The main challenge in implementing selective task group updates for multi-task optimization lies in concurrently tracking task relations and effectively utilizing them during optimization.\\n\\nThe inter-task affinity between tasks is represented as:\\n$$\\\\mathcal{A}^t_{i\\\\rightarrow k} = 1- \\\\frac{L_k(z^t, \\\\Theta_{s|i}^{t+1}, \\\\Theta_k^t)}{L_k(z^t, \\\\Theta_{s}^{t}, \\\\Theta_k^t)}$$\\n\\nThe proximal inter-task affinity between tasks is represented as:\\n$$\\\\mathcal{B}^t_{G\\\\rightarrow k} = 1- \\\\frac{L_k(z^t, \\\\Theta_{s|G}^{t+1}, \\\\Theta_k^{t+1})}{L_k(z^t, \\\\Theta_{s}^{t}, \\\\Theta_k^t)}$$\\n\\nIn conventional multi-task optimization settings, both $\\\\Theta_s^t$ and $\\\\Theta_k^t$ are updated concurrently. 
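For concreteness, the two affinity definitions above can be evaluated on a toy problem. The sketch below is our own minimal illustration, not the paper's implementation: the quadratic losses, the two-task setup, and the learning rate are hypothetical stand-ins for the multi-task network's losses on the batch $z^t$.

```python
import numpy as np

# Toy quadratic loss per task: L_k = 0.5 * ||A_k @ theta_s + theta_k - y_k||^2
rng = np.random.default_rng(0)
dim, tasks, lr = 4, (0, 1), 0.01
A = {k: rng.standard_normal((dim, dim)) for k in tasks}
y = {k: rng.standard_normal(dim) for k in tasks}
theta_s = rng.standard_normal(dim)                    # shared parameters
theta = {k: rng.standard_normal(dim) for k in tasks}  # task-specific parameters

def loss(k, ts, tk):
    r = A[k] @ ts + tk - y[k]
    return 0.5 * float(r @ r)

def grads(k, ts, tk):
    r = A[k] @ ts + tk - y[k]
    return A[k].T @ r, r  # gradients w.r.t. theta_s and theta_k

def inter_task_affinity(i, k):
    # A^t_{i->k}: only the shared parameters take a step, using task i's
    # gradient; task k's specific parameters stay at their current values.
    ts_next = theta_s - lr * grads(i, theta_s, theta[i])[0]
    return 1.0 - loss(k, ts_next, theta[k]) / loss(k, theta_s, theta[k])

def proximal_affinity(group, k):
    # B^t_{G->k}: the shared step uses the group's summed gradient, and the
    # task-specific parameters of every task in the group are updated too.
    ts_next = theta_s - lr * sum(grads(i, theta_s, theta[i])[0] for i in group)
    th_next = dict(theta)
    for i in group:
        th_next[i] = theta[i] - lr * grads(i, theta_s, theta[i])[1]
    return 1.0 - loss(k, ts_next, th_next[k]) / loss(k, theta_s, theta[k])

print(inter_task_affinity(0, 1))    # may be negative if task 0's step hurts task 1
print(proximal_affinity((0, 1), 1))
```

Both quantities reduce to `1 - (new loss / old loss)` for task `k`, so positive values mean the update helped task `k`; the proximal variant differs only in also stepping the task-specific parameters, which is what makes it trackable during ordinary training without revisiting past states.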
However, $\\\\mathcal{A}^t_{i\\\\rightarrow k}$, which tracks pairwise inter-task relations, does not account for updates to the task-specific parameters $\\\\Theta_k^t$. Specifically, after updating the shared parameters from $\\\\Theta_s^t$ to $\\\\Theta_{s|i}^{t+1}$, it requires revisiting past optimization states $\\\\Theta_s^t$ to compute the relations for other tasks or to proceed with further updates of both $\\\\Theta_s^t$ and $\\\\Theta_k^t$. This process introduces significant computational overhead, making it challenging to directly integrate inter-task affinity into multi-task optimization.\\n\\nIn contrast, proximal inter-task affinity expands the scope of inter-task relations to include groups of tasks and integrates updates to task-specific parameters. This approach enables efficient tracking of task relations during the optimization process without the need to revisit previous states. By leveraging proximal inter-task affinity, task relations can be seamlessly incorporated into optimization, making the process computationally efficient and well-suited for fast convergence\\u2014an essential requirement in optimization.\"}", "{\"comment\": \"Thank you for your detailed response and comprehensive experiments. Your reply addressed most of my concerns. Although I still have some doubts about the results under dynamic random, I acknowledge the contribution of the paper. Therefore, I decided to raise my score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. We are particularly grateful for your positive feedback and the insightful, challenging questions that have helped improve our work. We have addressed some of your concerns in the revised version of the paper and welcome any further feedback or discussion.\\n### **Answer to W1**\\nThank you for your insightful and thought-provoking question. 
The number of task groups is determined by the proximal inter-task affinity, which is tracked throughout the optimization process. This affinity reflects the relations between tasks as training progresses. From an optimization perspective, as training advances, the task-specific gradients for shared parameters tend to diverge because each task has different objectives. This leads to a decrease in the proximal inter-task affinity, resulting in an increase in the number of task groups.\\n\\nHowever, simply increasing the number of training iterations will not result in more task groups, as there is an upper limit to the number of task groups. A similar limit is observed in Figure 3c, where the number of task groups fluctuates during optimization but never exceeds 4. Additionally, our experimental observations show a trade-off between task losses and performance as convergence is approached, suggesting that there is no further capacity for learning task-specific information that would alleviate conflicts among heterogeneous tasks in shared space. Therefore, we think the upper bound on the number of task groups is closely related to the capacity of task-specific parameters for each task, though predicting this upper bound theoretically can be quite challenging. When the number of iterations is increased without changing other conditions, the graph in Figure 3c simply stretches along the x-axis. On the other hand, incorporating additional task-specific parameters would enable the model to handle a more heterogeneous set of tasks, potentially leading to more task groups.\\n\\n\\n### **Answer to W2**\\nIn our experiments, Nash-MTL fails to converge on the Taskonomy benchmark when using ViT, resulting in very poor performance, which we denote by the dashed lines in the table. However, it successfully converges on the NYUD-v2 and PASCAL-Context datasets, with particularly promising results on NYUD-v2. 
We conjecture that the increasing number of tasks might hinder convergence; a similar issue was also observed with MGDA.\\n\\n### **Answer to W3**\\nThank you for your insightful question. From a generalization perspective, the key issue that might be addressed in this work is how tasks should be grouped and optimized to improve generalization performance. Different viewpoints exist on this matter, depending on the target domain we aim to optimize for.\\n\\nIn conventional optimization settings, where the source domain (used for training) and the target domain (used for testing) have similar data distributions, grouping similar tasks typically reduces multi-task loss and enhances generalization. This is demonstrated in Theorems 1 and 2 of the paper. However, from a domain generalization perspective, grouping heterogeneous tasks may be more advantageous for improving generalization performance.\\n\\nIn this work, we focus on the scenario where the source and target domains share similar distributions, a common assumption in conventional multi-task optimization. As experimental evidence, we present an ablation study on different task grouping strategies and their impact on generalization performance, as shown in the table below. Specifically, these strategies include randomly grouping tasks with a predefined number ($N(\\\\mathcal{M})$), grouping heterogeneous tasks, and grouping homogeneous tasks (our approach). 
The results clearly demonstrate that grouping homogeneous task sets yields better generalization performance under the proposed settings.\\n\\n- Comparison of Various Grouping Strategies on Taskonomy.\\n\\n|Task|DE|DZ|EO|ET|K2|K3|N|C|R|S2|S2.5|$\\\\triangle_m$ (% $\\\\uparrow$)|\\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|Heterogeneous|0.0172|0.0176|0.1252|0.1741|0.1700|0.0920|0.2475|0.7781|0.1743|0.1849|0.1660|-3.10\\n|Random ($N(\\\\mathcal{M})$=2)|0.0177|0.0180|0.1259|0.1741|0.1707|0.0923|0.2662|0.7807|0.1757|0.1871|0.1617|-4.24\\n|Random ($N(\\\\mathcal{M})$=3)|0.0172|0.0177|0.1250|0.1741|0.1703|0.0920|0.2619|0.7754|0.1749|0.1866|0.1607|-3.35\\n|Random ($N(\\\\mathcal{M})$=4)|0.0183|0.0187|0.1277|0.1746|0.1706|0.0936|0.2812|0.7841|0.1804|0.1882|0.1636|-6.12\\n|Random ($N(\\\\mathcal{M})$=5)|0.0186|0.0184|0.1274|0.1747|0.1708|0.0935|0.3150|0.7842|0.1800|0.1888|0.1640|-7.17\\n|Random ($N(\\\\mathcal{M})$=6)|0.0208|0.0209|0.1349|0.1750|0.1721|0.0961|0.3334|0.8222|0.1976|0.1935|0.1703|-13.20\\n|Ours|0.0167|0.0169|0.1228|0.1739|0.1695|0.0910|0.2344|0.7600|0.1691|0.1836|0.1571|-0.64\"}" ] }
EdMb9TqqDY
Long-horizon Visual Instruction Generation with Logic and Attribute Self-reflection
[ "Yucheng Suo", "Fan Ma", "Kaixin Shen", "Linchao Zhu", "Yi Yang" ]
Visual instructions for long-horizon tasks are crucial as they intuitively clarify complex concepts and enhance retention across extended steps. Directly generating a series of images using text-to-image models without considering the context of previous steps results in inconsistent images, increasing cognitive load. Additionally, the generated images often miss objects or the attributes such as color, shape, and state of the objects are inaccurate. To address these challenges, we propose LIGER, the first training-free framework for Long-horizon Instruction GEneration with logic and attribute self-Reflection. LIGER first generates a draft image for each step with the historical prompt and visual memory of previous steps. This step-by-step generation approach maintains consistency between images in long-horizon tasks. Moreover, LIGER utilizes various image editing tools to rectify errors including wrong attributes, logic errors, object redundancy, and identity inconsistency in the draft images. Through this self-reflection mechanism, LIGER improves the logic and object attribute correctness of the images. To verify whether the generated images assist human understanding, we manually curated a new benchmark consisting of various long-horizon tasks. Human-annotated ground truth expressions reflect the human-defined criteria for how an image should appear to be illustrative. Experiments demonstrate the visual instructions generated by LIGER are more comprehensive compared with baseline methods. The code and dataset will be available once accepted.
[ "text to image generation", "visual instruction generation" ]
Accept (Poster)
https://openreview.net/pdf?id=EdMb9TqqDY
https://openreview.net/forum?id=EdMb9TqqDY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xxuUAarSeJ", "xSXXZ7Eg4u", "vtmJKJVazb", "u84ZRts2Un", "s7JWVuncTR", "nkpnNfr79e", "nYAXTbyE3A", "iwO9lq1BHK", "e8hXk1tNLZ", "c2AubClf4h", "WiXcUmpJKC", "VCpyM5oldU", "TgNM4rQhGQ", "SXBOx5bpQ3", "RfjnQVN4UH", "PV3cBNx4Iy", "OBN4Oa4had", "ImBPYpopui", "BWIS8GoYKz", "6nwtykD15f", "6U2APZkVNc" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732221602627, 1730380316286, 1732695234450, 1732684171561, 1732694820225, 1734483419148, 1732551950716, 1733201478562, 1732534966913, 1730660022973, 1732684354524, 1730705029921, 1732219930766, 1732222774002, 1730658454455, 1733042216480, 1737523388134, 1732224265573, 1732552487316, 1732263762174, 1732222836600 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_UNJf" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_P3KH" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_UNJf" ], [ "ICLR.cc/2025/Conference/Submission274/Area_Chair_429T" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_nhsY" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_nhsY" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_fPFq" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Reviewer_P3KH" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ], [ "ICLR.cc/2025/Conference/Submission274/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We provide a discussion on the valuable questions posed by the reviewer.\\n***\\n> **W1:** *Error analysis*.\\n\\nWe show some cases in Figure 9 in the appendix. In practice, LIGER employs a referee agent to judge the mistakes in the edited image. Figure 9 shows two types of errors in the edited images, i.e. **Reasoning error and generation error**. \\n\\nFor instance, in the reasoning error case, the previous step involves whisking eggs into the batter. The current step is to add vanilla extract. However, the error detector mistakenly assumes the egg should still be visible in the current step, which is not the case after whisking. The referee agent identifies the logic error and keeps the draft image as the final output. \\n\\nGeneration error, on the other hand, is attributed to failures of the editing tools or the location tool. In the example, the balls are mistakenly removed and also the quality of the generated image is unsatisfactory. The referee agent, considering image quality, identifies the mistake and picks the draft image as the final output.\\n\\nWith this rollout-and-compare mechanism, LIGER is robust to such errors. It is also worth mentioning that such errors are infrequent.\\n\\n> **W2:** *Robustness toward MLLM* \\n\\nWe conducted an ablation study substituting the GPT4-O model with open-source models Pixtral-12B and QwenVL-7B. Quantitative results in Table 6 reveal a performance degradation using the Pixtral-12B model; nevertheless, the self-reflection mechanism maintains effectiveness. 
MLLM capability influences the performance, as demonstrated by the inferior performance of QwenVL-7B compared to Pixtral-12B. We empirically find that the reasoning and image comprehension abilities are limited when using open-source models.\\n\\n> **W3:** *Error bars*.\\n\\nWe performed another trial over the 569 tasks, and the automatic evaluation results are shown in Table 7. The variance is relatively small. To further examine the variance, we randomly selected 50 tasks and ran 5 separate trials. Due to time and budget constraints, these additional trials were limited to the subset. Results in Table 8 show that the variance is also small and all the trials consistently outperform baseline methods.\\n\\n> **W4:** *API cost and visual memory.*\\n\\nA trial on the 569 tasks costs around 200 dollars in total, i.e. 0.035 dollars per image. \\n\\nThe task length does not affect the visual memory amount, since we update the visual memory every step and only store the memory for the current step. The visual memory is stored in a ''sliding window'' manner and is invariant to step length. \\n\\n> **Q1:** *Motivation of the task and difference with consistent video generation*\", \"the_motivation_of_the_task_can_be_summarized_as\": \"(1) We identify challenges in the task of visual instruction generation, i.e. showing **object state changes, object consistency and scene diversity** between steps. The challenges are exemplified especially in long-horizon tasks. \\n\\n(2) Visual instruction generation has great **real-world application potential**. Visual instructions are crucial for human learning as they are intuitive. With the rise of social media, users often refer to image blog posts to quickly grab key information and the status of the tasks on their mobile devices. Generating image instructions meets the needs of users. 
Furthermore, this task can also be integrated with large language models, enabling them to respond to user queries with illustrative processes.\\n\\n(3) Visual instruction generation lays the foundation for future work. **The images generated by LIGER could serve as highlight frames to guide video generation**. Moreover, generating image instructions potentially assists other scenarios such as embodied agent planning. Agents or robots can adapt to new tasks using the generated instructions. \\n\\nRegarding the difference between image instruction generation and semantically consistent video generation, we find that existing **open-source text-to-video generation models primarily focus on single object scene consistency, and struggle to directly generate long videos from text that exhibit reasonable scene diversity and state transitions.** Image instruction generation requires object identity consistency, scene diversity, and object state changes to make images easy to understand. Moreover, text-to-video generation can be broken down into text-to-image generation and image-to-video generation. Images generated by LIGER can be used as the highlight frames for the videos, aiding video generation by facilitating the text-to-image generation phase.\\n***\\nWe hope the discussions address your concerns. We are happy to have more discussions with you. Thank you again for your valuable opinions.\"}", "{\"summary\": \"This paper proposed a training-free framework for generating long-horizon visual instructions with logic and attribute self-reflection. It drafts consistent images for each step, using historical prompts and visual memory, and corrects errors with image editing tools. Experiments show it produces more comprehensive visual instructions than baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The training-free framework and self-reflection mechanism of LIGER provide a novel approach to visual instruction generation for long-horizon tasks.\\n2. The writing is clear and well-structured, making the concepts easy to understand.\\n3. The manually curated benchmark tests effectively demonstrate the advantages of the images generated by LIGER in terms of comprehensibility.\", \"weaknesses\": \"1. In Automatic evaluation, the authors didn't evaluate the quality of the generated images, only assessing alignment and consistency.\\n2. In this paper, the benchmark's narrow focus on cooking might not capture the full spectrum of complexities and variations present in long-horizon tasks across different industries or activities. It would be beneficial for the authors to provide more details on the types of tasks included in the benchmark.\", \"questions\": \"1. How does LIGER's self-reflection mechanism ensure that the identification and correction of errors in images are accurate and error-free? Is there a possibility of over-correction or failure to recognize certain types of errors? How do you balance over-consistency and identity inconsistency?\\n2. How much time consumption does LIGER introduce?\\n3. Given the benchmark's focus on cooking tasks, how does LIGER address the generalization to other long-horizon tasks, and are there plans to broaden the benchmark's scope? Like engineering and sports?\\n4. Is there a specific template or structure that the benchmark follows for the steps involved in the tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for the positive feedback!\"}", "{\"comment\": \"Thank you for the detailed response and additional experiments. They provide helpful clarification, I will keep my positive rating.\\n\\nI wish the authors the best with this work moving forward.\"}", "{\"comment\": \"Thank you for your response. 
Most of my concerns are resolved, so I've decided to raise my score in appreciation to 6.\"}", "{\"metareview\": \"The paper presents LIGER, a novel training-free framework for generating long-horizon visual instructions which utilizes historical prompts and visual memory. The framework is designed to improve the consistency and accuracy of instructional outputs. According to the reviewers (UNJf, nhsY), LIGER shows promising results on a new benchmark dataset, with particular emphasis on cooking tasks.\\n\\nThe strengths of the paper include the innovative training-free approach and the self-reflection mechanism of LIGER, which are recognized as significant contributions to the field (UNJf, P3KH). Additionally, the paper is well-structured and clearly written, and the introduction of a new benchmark dataset is commended for providing a solid foundation for evaluation (P3KH, UNJf, nhsY, fPFq).\\n\\nHowever, the weaknesses lie in the limited scope of the dataset, focusing primarily on cooking scenarios which may affect the generalizability of the findings (P3KH, UNJf). The reliance on proprietary models like GPT-4 also raises concerns about accessibility and replicability (nhsY, fPFq). A notable oversight is the absence of a detailed error analysis and robustness testing (nhsY, UNJf).\\n\\nThe decision to accept rests on the novelty and effectiveness of LIGER, despite the raised issues. 
The authors' thorough responses during the rebuttal period have been considered to balance out the concerns about generality and reliance on closed-source models.\", \"additional_comments_on_reviewer_discussion\": \"During the review discussion, concerns were raised about the depth of analysis of alternative methods and baseline comparisons (fPFq), the narrow focus of the benchmark dataset (P3KH, UNJf), and the lack of error analysis (nhsY).\\n\\nThe authors addressed these by providing additional comparisons with alternative methods, expanding upon their benchmark dataset to improve its scope (fPFq, P3KH), and including error analysis and robustness tests in their rebuttal. These additions have alleviated some concerns, though the performance impact of using open-source models instead of proprietary ones is still unclear (nhsY).\\n\\nIn weighing these concerns, the authors' comprehensive rebuttal and supplementary experiments have been taken into account. The originality and responsiveness to feedback have been pivotal in the decision-making process.\"}", "{\"comment\": \"Thank you for the clarifications. I will raise my score to 6. Good luck!\\n\\nBest,\"}", "{\"title\": \"A Kind Reminder Regarding Our Rebuttal.\", \"comment\": \"Dear Reviewer fPFq,\\n\\nThe discussion period will close in eight hours. We understand that you may be busy, but we are eager to hear back from you at your earliest convenience. 
\\n\\nWe highly value your feedback and thank you again for your patience and expert opinion during the review process.\\n\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Gentle Reminder Regarding our Rebuttal\", \"comment\": \"We sincerely thank reviewers for their professional and detailed reviews.\\n\\nAs the discussion period deadline approaches, we are looking forward to receiving feedback on our rebuttal, so we can engage in further discussions and refine our work.\\n\\nWe understand the reviewers may be under a large workload, and we are grateful for their time and patience.\\n\\nBest regards, authors.\"}", "{\"summary\": \"The paper proposes a method for visual story generation that is completely training-free but consists of many steps. The main parts of the method include keeping a historical prompt and a visual memory for consistency and self-reflection for refinement. The method shows improved results over the previous methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. the paper is written well.\\n2. the method section reads clearly. \\n3. the paper shows improved results over the baselines.\", \"weaknesses\": \"1. the method has a lot of moving parts. i would have liked to see some error analysis regarding how does the approach work if one of the components makes a mistake. for example, what happens if gpt-4o misses some details?\\n2. the current work is overly reliant on the closed-source gpt-4o. there are also plenty of open source models available. some ablation on using open source vlms could be useful and beneficial for the community. \\n3. error bars are not reported. \\n4. how expensive is this approach. if the sequence length is too long of the tasks. would that mean that we will need to store a lot of 'visual memory'? some discussion on this could have been helpful.\", \"questions\": \"i have some serious concerns about the motivation of this task. 
what can be the differences of this particular task with semantically consistent video generation for a scene?\\n\\nplease also look at the weaknesses. overall i like the paper. but if these concerns are addressed i can update my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for your appreciation!\"}", "{\"summary\": \"This manuscript introduces an innovative task named \\\"Visual Instructions for Long-Horizon\\\" which aims to generate a series of continuously aligned visual images corresponding to textual instructions. The manuscript proposes four self-reflection methods that leverage both visual and historical prompts. To prevent cumulative deviation and help generate along the correct trajectory, an approach termed Inversion-Based Visual Memory Calibration is proposed. The proposed method is noteworthy for its approach to addressing attribute errors and inconsistencies by utilizing existing mLLM tools.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe proposed task and solution are indeed novel. Ablation experiments validate the contributions of the various proposed modules, including the visual prompt, historical prompt, self-reflection mechanism, and inversion-based memory calibration.\\n2.\\tNumerous qualitative experiments demonstrate that the proposed tool-based self-reflection method maintains alignment with textual instructions, ensuring that the generated images adhere to contextual logic, thereby validating the overall efficacy of the approach.\", \"weaknesses\": [\"1.\\tThis manuscript would benefit from a more comprehensive comparative analysis. 
I recommend including comparisons with additional train-free methods, such as [1] and [2], as well as approaches that share similar concepts, like [3], and non-tool-based methods, such as [4].\", \"2.\\tFurthermore, not all metrics proposed by the authors are original, and the use of multimodal large language model (LLM) reasoning to evaluate images is also not original. It is advisable to revise the relevant statements in the contributions section to reflect this accurately.\", \"3.\\tWhile the research method is innovative, the reliance on several pre-designed strategies and external models for image refinement may not be particularly efficient in practical applications. This could be seen as a limitation of the manuscript and the authors are encouraged to provide an appropriate discussion of this issue.\", \"[1] Coherent Zero-Shot Visual Instruction Generation\", \"[2] Training-Free Consistent Text-to-Image Generation\", \"[3] Consistent self-attention for long-range image and video generation\", \"[4] StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation\"], \"questions\": \"Why is there no evaluation of the effect of \\u201c+self-reflection\\u201c alone in the ablation experiments? It appears to be directly combined with memory calibration in the final method without independent verification.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", 
\\n\\nSpecifically, Long-horizon Visual instruction generation requires not only object consistency but also scene diversity between images. As shown in the figure, ConsiStory, and Story diffusion **lack scene diversity**, also the **object state is static**. Moreover, the two methods also need to **identify the subject concept expression manually**, which is not required by LIGER. \\nIn the image example, pork shows an appearance change from raw to well-done. The reason is that previous methods only maintain consistency through sharing hidden states in the UNet, while LIGER leverages the reasoning ability of large language models. \\n\\n> **W2:** *Statements on evaluation metrics*.\\n\\nWe have modified the statements in the contribution section of the introduction. We clarify that the **testing goals of these metrics are different**. \\n\\n(1) First, the CLIP-Score is used to test the alignment between the generated images and human perceptions. The human perceptions are represented by hand-written text annotations. \\n\\n(2) The DINO ratio score is designed to evaluate both the coherence between related steps and the distinction between unrelated steps by measuring image similarity and divergence respectively. \\n\\n(3) Moreover, the BERT score evaluates image illustrativeness through a modality transfer test which is innovative and different from previous metrics. \\n\\nIn terms of the GPT evaluation, we further evaluate the generated instructions from the perspective of Large language model comprehension. Overall, the evaluation metrics reveal different challenges and provide a thorough comparison.\\n\\n> **W3:** *Inference efficiency*\\n\\nIndeed, incorporating various tools inevitably raises inference time. Our speed test reveals that vanilla SDXL can generate images for a 10-step task in about 60 seconds while LIGER takes about 120 seconds. Despite LIGER being slower than the vanilla diffusion model, it exhibits enhanced generation quality. 
Moreover, LIGER is still significantly faster than the human design process, which has real-world potential to generate comprehensive images. We include these discussions in the limitation section of the appendix. \\n\\n> **Q1:** *Self-reflection mechanism ablation.*\\n\\nWe conduct further ablation study of SDXL+self-reflection (namely SDXL+R), which is essentially LIGER without visual memory and history prompt. Quantitative results are shown in Table 4 in the appendix of the manuscript. A performance improvement over vanilla SDXL is observed, demonstrating its effectiveness.\\n\\nTo dive into the effectiveness of the self-reflection mechanism, we also test the performance of SDXL+V+R and SDXL+H+R. However, due to the time limit and budget, we provide the result on a 100-task subset in Table 5 of the manuscript. Results show that the self-reflection mechanism consistently improves performance.\\n***\\nWe hope the response addresses the concern. We are willing to have discussions if there are still questions.\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We thank the reviewer for the question, here is a detailed discussion:\\n***\\n> **W1:** *Comparison fairness*.\\n\\nThe proposed dataset is the first dataset curated for the long-horizon visual instruction generation task. The textual tasks are sourced from the widely-used benchmarks and **LLM creates the steps without any specific template, ensuring fairness**. The automatically curated tasks also assess the robustness of different methods given diverse instructions. Human annotators are only responsible for writing their comprehension toward these automatically generated instructions. Furthermore, **all methods are evaluated using the same metrics**. Overall, the comparison conducted on this dataset is fair and reasonable.\\n\\nThe dataset highlights the complexities of this task, i.e. generating images comprehensible with **object state changes, identity consistency, and scene diversity**. 
These features have not been examined in existing benchmarks. The proposed dataset includes a variety of tasks of different lengths and randomly generated textual instructions, providing a comprehensive test of different methods.\\n\\nThe proposed framework LIGER is also applicable to other datasets. We qualitatively show the generation results on the recipeplan dataset against baseline methods in Figure 11 of the appendix. For the shorter tasks, LIGER still generates illustrative instructions.\\n\\n> **W2:** *Small dataset restricted to cooking domain.*\", \"we_chose_cooking_scenarios_because_they_encapsulate_the_key_challenges_of_visual_instruction_generation\": \"**(1) Multiple scene setting:** Cooking involves various stages, e.g. preparation, cooking, and serving, each occurring in different scenes. Good instruction should have various scenes under different stages, showing scene transitions.\\n\\n**(2) Various object and attribute status:** The cooking process usually has various ingredients in different states. As the task progresses, the image should reflect these state changes. For instance, when baking chicken wings, the chicken should be initially raw, while ending up cooked.\\n\\n**(3) Logically coherent between steps:** Cooking scenarios require logical coherence between consecutive steps. Many incremental instructions like \\\"add salt and pepper\\\" should follow the previous step image. On the other hand, there are also logically independent steps that should maintain visual divergence. For example, after preparing vegetables, we need to heat oil in a pan; these two steps should not look alike. This requirement is particularly important in long-horizon tasks.\\n\\nGiven the above challenges, evaluating the cooking scenario effectively reflects the ability of different methods. Current methods also mainly focus on the cooking domain, yet they struggle with the above challenges. 
\\n\\nAnother reason is that **many tasks are not suitable for image instructions** such as \\\"How to spend your leisure time?\\\" or \\\"How to become a professor\\\". In contrast, visual instructions for cooking scenarios are easier to understand. For instance, merely using the text \\\"wait for a steak well-done\\\", users can not tell whether the steak is well-cooked. Instead, showing a picture of a well-cooked steak clarifies the situation.\\n\\nLIGER is also suitable for tasks in other daily tasks. Additional qualitative results are shown in Figure 10 of the appendix. Since LIGER is a training-free framework, the generation quality depends on the diffusion model. Some detailed fine actions are difficult to generate due to the limited ability of the pre-trained diffusion model.\\n\\nThe proposed dataset contains 569 long-horizon tasks, involving generating **over 5500 images in one trial**, which is substantial. Curating the dataset also cost some time since the annotators had to annotate over these 5500 instructions.\"}", "{\"summary\": \"This paper focuses on long-horizon text-to-image generation tasks and proposes a train-free framework. Long-horizon text-to-image tasks face two main challenges: temporal consistency and the accumulation of attribute errors. To address this, the work injects historical text descriptions and visual tokens into a Diffusion Model, then leverages GPT-4 and the segmentation large model LISA to detect and locate errors in the images. Image editing tools are subsequently employed to correct these errors. Meanwhile, the framework uses DDIM inversion to obtain the features of the edited images. 
Additionally, the paper introduces an evaluation dataset of over 500 samples to assess the effectiveness of long-horizon image generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-structured and smoothly written, making it easy for readers to understand.\", \"The figures and tables in the paper are well-designed, making them easy to understand.\", \"The motivation is well-defined. To address the temporal consistency issue in long-horizon visual instruction generation, it proposes injecting historical text and visual information into the diffusion model. To tackle the attribute error problem, it introduces a method of using MLLMs to detect errors and then calling an agent to correct them, which demonstrates a certain level of novelty.\"], \"weaknesses\": [\"This paper only validates its effectiveness on a self-proposed evaluation dataset, which is somewhat unfair. It is recommended to find more suitable and fair evaluation datasets for verification.\", \"The evaluation dataset used in this paper is limited to cooking scenarios, lacks generality, and is relatively small, with only about 500 samples.\"], \"questions\": [\"In Section 3.1, \\\"Visual Memory Sharing\\\" uses an attention module to inject historical visual information into the Diffusion model. However, the paper claims that the method is train-free, so where are the weights for this attention module loaded from? If this attention module is not trained on the specific task, can it really be used in a zero-shot manner? I have doubts about its performance.\", \"The symbol $O_i$ in Eq.3 may lack an explanation, which might leave readers wondering how $O_i$ is used in the Diffusion model.\", \"In Section 3.2, it is recommended to briefly explain how the image editing tools mentioned (such as DragonDiffusion, SD inpainting Rombach, LAMA, etc.) 
are used in this method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer fPFq,\\n\\nAs the deadline of the discussion period was extended, we conducted the additional ablation experiment on the whole dataset.\\n\\nThe results of the whole 569 tasks are shown in the table below.\\n\\n| Method | CLIP-Score $\\\\uparrow$ | DINO-Score $\\\\downarrow$ | BERT-Score $\\\\uparrow$|\\n| -------- | --- | --- | --- | \\n| SDXL | 2.5837 | 0.8516 | 0.8699 |\\n| SDXL+V | 2.6251 | 0.8239 | 0.8719|\\n| SDXL+H | 2.6842 | 0.8224 | 0.8707|\\n| SDXL+R | 2.7270 | 0.7346 | 0.8732|\\n| SDXL+H+V | 2.7168 | 0.7459 | 0.8721|\\n| SDXL+V+R | 2.7428 | 0.7053 | 0.8734|\\n| SDXL+H+R | 2.7440 | 0.6653 | 0.8740 |\\n| LIGER | 2.7555 | 0.6338 | 0.8743|\\n\\nIf you have any other questions or concerns, we are always here to do our best to answer them. We look forward to continuing our discussion with you.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the valuable questions. A point-to-point reply is provided below:\\n***\\n> **W1:** Image quality evaluation.\\n\\nThe core evaluation criterion for this task is whether the generated images assist human comprehension, while the image quality is indeed important. However, it is hard to evaluate using conventional metrics since there are no ground truth images. \\n\\nTo evaluate the image quality, we add GPT evaluation and user study. Specifically, we prompt the GPT4o model to rate the quality of an image on a scale of 1 to 5 (5 is the best) and test the whole dataset. For the user study, we recruited 5 users, let them view 50 images from each method, and select the best image from the three methods. The win rate and GPT score are reported in Table 9 of the appendix. 
Note that LIGER is a training-free method, meaning the image quality also depends on the pre-trained tools. \\n\\n> **W2:** Dataset scenarios.\\n\\nThe proposed dataset mainly focuses on the cooking domain but also involves daily housework. We showcase some qualitative results on other scopes in Figure 10. The reason for using the cooking scenario is that:\\n\\nFirstly, cooking scenarios present significant challenges representative of long-horizon tasks. Long-horizon tasks usually involve **multiple sub-procedures**, requiring the instruction images to show **scene diversity in different stages**. The cooking procedure reflects this challenge as it usually involves various stages like preparing, cooking, serving, etc. Moreover, the **object attribute in long-horizon tasks changes as the task progresses**, which is also present in cooking scenarios. For instance, instructions for cooking steak should start with a raw steak and end with a cooked one. Therefore, testing in the cooking domain is meaningful and representative. Current methods struggle to deal with these challenges in long-horizon tasks.\\n\\nSecondly, we acknowledge some limitations. Since LIGER is training-free, the generation ability depends on the pre-trained diffusion model. We find the generation quality of fine-grained actions and parts of complex structures unsatisfying.\\n\\n> **Q1:** Error cases.\\n\\nThe identification and correction are not error-free. There could also be cases of over-correction, though rarely. We add a discussion of bad cases in Figure 9 of the appendix. To mitigate the errors, we allow the error detector to finish its trial before the referee agent decides whether the edited image is better than the draft image. If the edited image is not ideal, we pick the draft image as the final output. \\n\\nOver-consistency and identity inconsistency are different levels of errors: over-consistency is a global-level error where the images of two steps should not look alike.
In contrast, identity inconsistency is an object-level error, meaning the object in two steps should look the same but does not. Over-consistency is a more noticeable error, so the error detector **first assesses over-consistency errors**. If an over-consistency error exists, the image is directly regenerated. If no over-consistency error is found, then the error detector starts to inspect identity inconsistency errors.\\n\\n> **Q2:** Inference time consumption\\n\\nWe conducted a speed test over 50 randomly selected tasks using a locally deployed multi-modal large language model. LIGER takes around 120 seconds to generate instructions for a 10-step task while the frozen stable diffusion model takes around 60 seconds on a single A100 GPU. \\n\\n> **Q3:** Dataset scope.\\n\\nLIGER is a high-level method for long-horizon tasks. It addresses common issues in long-horizon tasks, making it widely applicable. Cooking scenarios are representative since they involve the aforementioned challenges. We also show some additional results in Figure 10 of the appendix.\\n\\nIn the future, we plan to broaden the benchmark and explore more scenarios. However, we acknowledge certain challenges when testing on other scopes. Firstly, there are tasks that are hard to illustrate with images, like \\\"How to teach your child?\\\" Second, since LIGER is a training-free method, the generation quality depends on the pre-trained diffusion model. The pre-trained diffusion models struggle to generate fine-grained actions like sewing a knit or building a house using different tools. Despite these challenges, we are optimistic that LIGER will benefit from future advancements.\\n\\n> **Q4:** Step generation template.\\n\\nWe do not have a specific template for the GPT model to generate steps.
The motivation is to test whether LIGER is robust to instructions in different styles.\\n***\\nWe hope the above discussions address your concerns, and we are willing to have more discussions.\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> **Q1:** *Detail of the visual memory mechanism.*\\n\\nVisual memory sharing is a training-free mechanism that utilizes the visual features from the previous step to guide the generation of the current step. Specifically, during the generation of each step, we inject the visual feature tokens of the previous step's image into the **attention operation of the pre-trained UNet**, making the network aware of the previous step. Note that the attention mechanism is inherently part of latent diffusion models. We simply load the weights of the pre-trained SDXL model.
This technique is also used in other works for identity-keeping and has been proven to be effective [1][2]. We also conducted qualitative and quantitative ablations in Table 2 and Figure 6 of the manuscript.\\n\\n> **Q2:** *$O_i$ in Eq3*\\n\\n$O_i$ in Eq 3 is used in the UNet $U$ in the subsequent deeper layers. We further clarify it in the paper.\\n\\n> **Q3:** *Usage of the tools*\\n\\nThe input of DragonDiffusion involves two masks, one in the previous step image and one in the current step image. The masks are generated by the LISA model using the text generated by the error detector. SD inpainting takes the new object description generated by the error detector and a mask of the wrong objects as input. LAMA takes a mask of the redundant object as input. We will further clarify this in the paper. \\n\\n[1] Training-Free Consistent Text-to-Image Generation\\n\\n[2] Consistent self-attention for long-range image and video generation\\n***\\nWe hope the above discussions address your concern. We are looking forward to having more discussions about the paper.\"}" ] }
EdKSI2ijUY
LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models
[ "Marwa Abdulhai", "Isadora White", "Charlie Victor Snell", "Charles Sun", "Joey Hong", "Yuexiang Zhai", "Kelvin Xu", "Sergey Levine" ]
Large language models (LLMs) provide excellent text-generation capabilities, but standard prompting and generation methods generally do not lead to intentional or goal-directed agents and might necessitate considerable prompt tuning. Even the best current LLMs rarely ask clarifying questions, engage in explicit information gathering, or take actions that lead to better decisions after multiple turns. Reinforcement learning has the potential to leverage the powerful modeling capabilities of LLMs, as well as their internal representation of textual interactions, to create capable goal-directed language agents. This can enable intentional and temporally extended interactions, such as with humans, the emergence of complex skills such as persuasion, and long-horizon strategic behavior, such as in the context of games. Enabling this requires the community to develop reliable reinforcement learning algorithms for training LLMs. Developing such algorithms requires tasks that can gauge progress on algorithm design, provide accessible and reproducible evaluations for multi-turn interactions, and cover a range of task properties and challenges in improving reinforcement learning algorithms. Our paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for LLMs, together with an open-source research framework for getting started on multi-turn RL with offline value-based and online policy-based RL methods. Our benchmark consists of 3 Interactive Dialogue tasks and 5 RL Capability tests for a total of 8 tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
[ "benchmarks", "LLMs", "RL" ]
Reject
https://openreview.net/pdf?id=EdKSI2ijUY
https://openreview.net/forum?id=EdKSI2ijUY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wZcEQKVkbL", "vmCU8CoTiq", "unOuC6dUA3", "uK2Dhmn1BU", "pCFEDt3M8V", "o5x3JrcGFv", "n1YMhKxqyW", "mn9GKvB2zc", "ciT0CJaacQ", "bB408ZhuSe", "QsrymNE5VA", "MmsBcOkfNb", "JrOorL2RZ6", "Hph5hmm1eT", "GvrJFqxbIu", "GiYLurBpSD", "GWI8bUoP0Q", "EhbIzv38Ua", "E04SCkPYxu", "Cxn0Y8Eouv", "BtW8QJys03", "B136wRgEp1", "5HSn2r2C3K" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733284099372, 1733292348545, 1733285015402, 1733315036058, 1733315028194, 1733292976356, 1733294449034, 1733242216893, 1733247902268, 1730670131455, 1733266851369, 1737524302238, 1730536808415, 1733242247120, 1733292797690, 1733295420786, 1733210454503, 1730602228849, 1734601361546, 1733283475568, 1733247561943, 1730694705685, 1733315018857 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Reviewer_j29V" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14185/Reviewer_KbFz" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Reviewer_9eFK" ], [ "ICLR.cc/2025/Conference/Submission14185/Reviewer_WagK" ], [ "ICLR.cc/2025/Conference/Submission14185/Area_Chair_3GsZ" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ], [ "ICLR.cc/2025/Conference/Submission14185/Reviewer_9eFK" ], [ "ICLR.cc/2025/Conference/Submission14185/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer j29V\", \"comment\": \"We would like to clarify the methodology with which we generated our datasets to train our simulators, and how we ensured high quality and consistency. As shown in Figure 2, we train a simulator that serves as an \\u201coracle\\u201d for the task, and hence does not require any capabilities of strategic reasoning, but provides signals to help the agent model learn. For example, the role of the oracle in the Twenty Questions task is to provide objective yes/no answers to questions about the object, and in Guess My City, to provide more open ended information about a query on the city. OpenAI\\u2019s GPT-3.5 has been shown to be able to generate reasonable questions and answers when used out of the box, which is why we leveraged it to collect our initial dataset. We have provided prompts that we use to generate the data to train our oracle models in our Appendix, and snippets below to show our thought process to maintain high accuracy.\\n\\nThe method for collecting the dataset is as follows. For each conversation, we select uniformly at random from the above list the word that the oracle is answering question about. The oracle is an LLM (OpenAI\\u2019s GPT3.5) given the following prompt. In our prompts, we denote variables that we fill in with variable data with {{variable}}.\", \"prompt\": \"You are a question answering oracle. 
You will answer each question about an object with Yes or No. If the answer could be both, answer with the most typical scenario. Here\\u2019s a few examples:\", \"example_1\": \"\", \"object\": \"Computer\", \"question\": \"Does the object use electricity?\", \"answer\": \"Yes.\", \"explanation_of_answer\": \"Computers need electricity to function. [...]\"}", "{\"title\": \"Response to Reviewer WagK\", \"comment\": \"We thank the reviewer for their feedback and very helpful observations, we appreciate the clarity. We have addressed points raised in your review below:\\n\\n1. *\\\"L074 - L075\\\"*: We thank the reviewer for this clarification point and will be sure to make the modification in the paper. \\n2. *\\\"L086 - L089\\\" While some works have sought to apply RL for multi-turn tasks (Singh et al.,1999;Li et al.,2016; Shah et al.,2016;Kwan et al.,2022), particularly for goal-directed dialogue (Lewis et al.,2017; Verma et al.,2022), there has been comparatively little research on improving the underlying RL algorithms and very little head-to-head comparison on same sets of tasks.*\\n\\nWe would like to clarify that the lack of comparisons refers to the fact the RL algorithms have not been compared directly on the same suite of tasks before. So far, these papers have created their own benchmark while proposing an algorithm. For example, the ILQL paper [1] or the NLPO paper [2] both introduce separate tasks, and the paper that introduced the 20 Questions task only benchmarked with the PPO algorithm [3]. We seek to remedy this and provide a suite of tasks and algorithms that we can use to directly compare performance across a variety of RL LLM algorithms. This is a significant contribution because although some of the tasks have been considered before, algorithms to improve performance on these tasks have not been evaluated and directly compared. Moreover, we provide a codebase for training new algorithms in JAX directly on our task.\", \"works_cited\": \"1. 
Snell, Charlie, et al. \\\"Offline rl for natural language generation with implicit language q learning.\\\" arXiv preprint arXiv:2206.11871 (2022). \\n2. Ramamurthy, Rajkumar, et al. \\\"Is reinforcement learning (not) for natural language processing: Benchmarks, baselines, and building blocks for natural language policy optimization.\\\" arXiv preprint arXiv:2210.01241 (2022).\\n3. Zhang, Y., Lu, J., & Jaitly, N. (2024, August). Probing the multi-turn planning capabilities of LLMs via 20 question games. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1495-1516).\"}", "{\"title\": \"Response to Reviewer j29V\", \"comment\": \"3. *My second concern is the choice of tasks for the RL capability component of the benchmark. Barring the Text-Nav and to a lesser extent Wordle settings, the tasks are regular reinforcement learning tasks that are presented in natural language but do not really test language understanding or use. While I recognize that these are intended to be unit-tests for various RL capabilities in language models, I do not have good intuition on how well algorithm success on these would generalize to multi-turn dialogue or tool use.*\\n\\nOur objective in creating this benchmark is to present tasks that apply RL algorithms for multi-turn tasks in the domain of goal-directed dialogue, where agents must learn from interaction with a conversation partner. However, to enable such a large undertaking, we require tasks that can first test capabilities of RL algorithms that are essential for multi-turn dialogue, including trajectory stitching, credit assignment, and dealing with complex language. Hence, we have designed five tasks as RL Capability Tests, which are text games designed to isolate specific capabilities of RL training as shown in Figure 4. As seen, these text-games do not test all of the capabilities of RL together, which is only possible through the dialogue-based tasks. 
Our benchmark includes tasks that involve free-form text generation and a longer turn length. We challenge the agents in our tasks to not only follow instructions and understand the world, but plan over long trajectories, generate complex text, trajectory stitch, and resolve partial observability. \\n\\nThe RL Capability tests which we have introduced in Figure 2 are ideal testbeds for testing multi-turn RL properties in language, because they are text-based versions of tasks where RL is known to excel. We design each of the tasks with specific properties and comparisons in mind. For example, for the Maze and Text-Nav we test both partially observed and fully observed versions to highlight the impact of partial observability. In addition, the Text-Nav task is very similar to the Maze task, but places more emphasis on realistic text. Additionally, we have also provided a symbolic version of the Maze task for better comparison with the text based version, and have explained our findings in the Appendix. \\n\\nRegarding the three dialogue tasks, they have been designed with increasing levels of difficulty, with twenty questions testing the ability of RL algorithms to perform information gathering, guess my city testing the ability to ask questions beyond yes/no, and the Car Dealer task to test more strategic decision making and persuasive capabilities of RL algorithms for LLMs. With respect to the Car Dealer task, we spent a considerable effort to ensure diversity in the responses of sellers, by providing different desired brands, features, classifications (i.e. car or truck), and budgets in our prompting to generate the datasets. \\n\\n4. *Does the model need to act in accordance with a particular seller archetype or were the three different types there to generate diverse data? Is success dependent on simply selling a car (at which point a degenerate strategy of selling a car for $0 would succeed) or are there other conditions that determine success or reward? 
How are these implemented?\"*\\n\\nThe model does not need to act in accordance with a particular seller archetype, and this was only used to generate diverse data. Regarding the reward function, the success at the task is the price of the car sold/bought at the very end of the conversation. The reviewer is correct that there are no other metrics for the reward function. We did experiment with a more complicated version of the reward that included other features such as the MSRP of the car, and found that this did not achieve many gains in comparison to the simpler reward function, which we opted for instead.\"}", "{\"title\": \"Response to Reviewer WagK\", \"comment\": [\"[7] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.\", \"Uses AlfWorld and Webshop, refer to [4,6]\", \"[8] Guo, X., Yu, M., Gao, Y., Gan, C., Campbell, M., & Chang, S. (2020). Interactive fiction game playing as multi-paragraph reading comprehension with reinforcement learning. arXiv preprint arXiv:2010.02386.\", \"Uses Jericho benchmark as well, see discussion in [3]\", \"[9] Yao, S., Rao, R., Hausknecht, M., & Narasimhan, K. (2020). Keep calm and explore: Language models for action generation in text-based games. arXiv preprint arXiv:2010.02903.\", \"Uses Jericho benchmark as well, see discussion in [3]\", \"[10] Ammanabrolu, P., Tien, E., Hausknecht, M., & Riedl, M. O. (2020). How to avoid being eaten by a grue: Structured exploration strategies for textual worlds. 
arXiv preprint arXiv:2006.07409.\", \"Emphasis on creating QA-dataset which is then used to later train downstream online RL (not offline RL)\", \"Text games similar to TextWorld, less emphasis on stochastic text/variation in textual responses\", \"Uses the Jericho Benchmark, see discussion in [3]\", \"[11] Singh, I., Singh, G., & Modi, A. (2021). Pre-trained language models as prior knowledge for playing text-based games. arXiv preprint arXiv:2107.08408.\", \"Proposed DBERT-DRNN\", \"Uses Jericho Benchmark, see discussion in [3]\", \"[12] Fan, A., Urbanek, J., Ringshia, P., Dinan, E., Qian, E., Karamcheti, S., ... & Weston, J. (2020, April). Generating interactive worlds with text. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 02, pp. 1693-1700).\", \"About generating text-game environments on the fly\", \"Not related to using RL for LLMs\", \"[13] Yuan, X., Fu, J., Cote, M. A., Tay, Y., Pal, C., & Trischler, A. (2019). Interactive machine comprehension with information seeking agents. arXiv preprint arXiv:1908.10449.\", \"Does not use a language model with RL, only uses an RL agent\", \"Limited set of actions - previous, next, ctrl+F, and stop\", \"**Related Work on Interactive Dialogue**:\", \"[14] De Bruyn, M., Lotfi, E., Buhmann, J., & Daelemans, W. (2022, December). 20Q: Overlap-Free World Knowledge Benchmark for Language Models. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) (pp. 494-508).\", \"20Q is the game of twenty questions just like our benchmark\", \"Does not provide an interactive evaluation, and it is only based on F1 score. This is critical for evaluating RL as it is important not only to reproduce the data, but to perform well in live interaction\", \"[15] De Bruyn, M., Lotfi, E., Buhmann, J., & Daelemans, W. (2022, December). Is it smaller than a tennis ball? language models play the game of twenty questions. 
In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 80-90).\", \"GPT-3 plays 20 questions interactively\", \"Dataset of 2,000 questions - we have over 36k questions in our dataset\", \"Did not train RL on this task\", \"**On Offline RL for LLMs**:\", \"We would like to note that most of the citations from the reviewers are focused on online RL. Our benchmark focuses on providing an optimal testbed for both offline RL and online RL, by providing large datasets for training offline RL algorithms for LLMs, simulators for online RL training and offline evaluation, and several offline RL implementations including MC Returns, Filtered BC, and ILQL. We created the Car Dealer task to address the issues in [20] which uses the Craigslist dataset, such as instabilities seen when training offline and online RL algorithms on human datasets. Our Car-Dealer task was created with inspiration from this task, but with modifications to fix the issues we have seen with in practice, including dataset diversity induced by different car specifications and deterministic strategies induced by three personalities for the buyer and sellers that form natural agreement with one another. [16-19] list a series of related works in offline RL for LLMs, primarily focusing on either one task or one algorithm. Our work expands upon these works and provides a suite of both text game and dialog tasks.\", \"[16] Kumar, Aviral, et al. \\\"When should we prefer offline reinforcement learning over behavioral cloning?.\\\" arXiv preprint arXiv:2204.05618 (2022).\", \"[17] Prudencio, Rafael Figueiredo, Marcos ROA Maximo, and Esther Luna Colombini. \\\"A survey on offline reinforcement learning: Taxonomy, review, and open problems.\\\" IEEE Transactions on Neural Networks and Learning Systems (2023).\", \"[18] Snell, C., Kostrikov, I., Su, Y., Yang, M., & Levine, S. (2022). Offline rl for natural language generation with implicit language q learning. 
arXiv preprint arXiv:2206.11871.\", \"[19] Verma, S., Fu, J., Yang, M., & Levine, S. (2022). Chai: A chatbot ai for task-oriented dialogue with offline reinforcement learning. arXiv preprint arXiv:2204.08426.\"]}", "{\"title\": \"Response to Reviewer WagK\", \"comment\": \"4. *\\\"L100 \\u2013 L103: Observation: The human study with 40 participants and 18 natural text examples does not statistically justify that the simulation results reflect human norms of conversation. What is the basis of the simulation results reflecting human norms of conversation on a very small sample size of participants and likewise very small number of examples?*\\n\\nWe acknowledge that the study is limited in scope and size. However, we found that the 18 examples that we selected to be representative of the three dialog tasks. Additionally, this is not the only metric we have used to evaluate our simulator. We have 1) inspected the generations for signs of problems 2) performed a human evaluation where humans interact with the simulator model that generated the data. If humans are able to successfully interact with the models, it is a clear signal that our data is also natural and contains the desired properties. Additionally, we have 3) found better performance from our algorithms compared with the BC models, signaling that the data the simulator was trained on is providing a useful signal for the improvement of RL algorithms. Lastly, we have 4) conducted a study as recommended by Reviewer j29V on the self-consistency of the LLM oracle, by taking the same sample of conversations used in our human evaluation in Appendix A and prompting an LLM (specifically gpt-4o-mini) as to whether the oracle\\u2019s answers to questions are consistent with the object they have in mind. We do this for all three tasks and provide our results in their response. Please refer to it for more details. \\n\\n5. *\\\"L105: How do you define datasets that are sufficiently difficult and complex to gauge? 
Is there any metric or any qualitative decision making? The phrasing \\\"sufficiently difficult and complex\\\" needs to be justified.\\\"*\\n\\nAs we discuss in Section 4.1, there are certain capabilities that we want to assess with RL algorithms, including strategic decision making, complex language, credit assignment, partial observability, and trajectory stitching. To assess credit assignment and trajectory stitching capabilities in offline RL, we need datasets that are sufficiently complex and diverse. Current benchmarks that assess RL performance in text-games include a large suite of text games as shown above, but insufficient data to truly assess the capabilities of offline RL. \\n\\n6. *\\\"L117 - L118: Observation: What other baseline methods? It should be mentioned in the Appendix at least.\\\"*\\n\\nAs noted in Table 2, we evaluate the methods of BC, %BC, MC Return, ILQL, Online PPO, Online % BC, GPT-4. We discuss the methods in detail in Section 5. We will also revise this sentence to include a reference to the other baseline methods used as per your suggestion. \\n\\n7. *\\\"L129 \\u2013 L130: Observation: If other research papers have already proposed text games for evaluating language-based agents and interactive dialogue, please justify why this paper using RL algorithms for such tasks is a novel or a major contribution. Is there any engineering benefit? Please share that as other papers have covered this direction of research.\\\"*\\n\\nPlease refer to our response to Question 3. \\n\\n8.. *\\\"L205 \\u2013 L206: W Observation: Please correct Grammatical errors like \\\"are shown\\\" should be \\\"as shown\\\". Please note that clicking Figure 4 leads to Figure 1. The source tex file needs to be corrected. Also please mention that Figure 4 is in Appendix B.\\\"*\\n\\nThank you for catching this, we will fix this in our revision!\\n\\n9. *\\\"L321-L322: Observation: Please revise the sentence construction. 
The paper needs edits and revisions before publication.\\\" \\\"L441-L442: Observation: Correction of the phrase \\u2018is enable\\u2019 to \\u2018is to enable\\u2019 should be done.\\\"*\\n\\nWe have fixed these sentences in our revision and have proofread the paper further. We thank the reviewer for catching this.\"}", "{\"title\": \"Response to Reviewer 9eFK\", \"comment\": \"We apologize for the delay in our response, and we thank the reviewer for their feedback. We've addressed points raised in your review by: (1) including several examples of failures of the models (2) explaining our user study on naturalness of conversation and explaining how we emphasized diversity in task dialog (3) providing further clarification on capabilities of tasks (4) referencing works that have used the tasks in our paper as well as our repository for research in RL and LLMs (5) discussing ethical implications of our work (6) explaining why open-sourcing is important.\\n\\n1. *\\\"It would be interesting to see a few more examples (qualitative would be fine) of some of the observed failure modes, and some further analysis on where and why specific methods fail.\\\"*\\n\\nWe have provided two examples of some observed failure modes for the tasks, and will include several others in the Appendix as per your request.\\n\\n*********************************************\", \"example_1\": \"Twenty Questions\\nFor this task, the question that is asked by the model might randomly be related to the object (i.e. asking if it is something you can find in a toolbox) However, this does not happen all of the time. For example:\", \"word\": \"Wrench\", \"q\": \"Is the object a nail? No.\", \"correct\": \"False\\n******************************************************\", \"example_2\": \"Car Dealer\\n\\nFor this task, the seller model sometimes forgets that they must convince the buyer to buy a car.\", \"agent\": \"You're welcome! Have a great day!\\n*********\", \"buyer\": \"Will do. 
Thanks again!\"}", "{\"title\": \"Response to Reviewer 9eFK\", \"comment\": \"4. *\\\"In the impact statement, you mention dual-use implications of your work including \\\"persuasion, manipulation and addictive engagement of users at a large scale\\\". You then mention that you \\\"have designed [y]our datasets and reward functions such that [they] prioritize fairness and human-aligned outcomes\\\". Can you please elaborate which design steps you have taken to achieve this prioritization?\\\"*\\n\\nWe would like to clarify that our tasks, specifically those relating to dialog, have reward functions that prioritize fairness and have been designed to be fairly intuitive. Tasks such as Twenty Questions and Guess My City have reward functions that are very neutral in nature such that you receive positive reward if you guess the name of the object or city. However for the Car Dealer task, the seller agent could potentially be deceptive or nefarious towards the buyer agent. In order to avoid unexpected behaviors, we have prompted both the seller and buyer agents with their respective strategies in order to generate the dataset (i.e. the buyer likes discounts), and have designed the reward function to be very straightforward (the price the car was bought for) and reflective of the outcome for both the buyer and the seller. We would like to clarify that our statement regarding dual-use implications of the work is an acknowledgement that there are unintended consequences from dialog generated with LLMs, including being persuasive, manipulative, etc. These risks would also be present in data that was collected from humans. We believe that finetuning LLMs with RL objectives can minimize such risks, and that research into RL/LLMs and such a benchmark can help us train LLMs that are more aligned with humans. We hope that our work at the intersection of RL and LLMs is one step in that direction. 
As per your comment, we will clarify our ethics section further with these points.\\n\\n5. *You also express the intent to make public the code, datasets, hyperparameters as well as training procedure. What makes you confident that you sufficiently mitigate the stated implications and risks such that it is safe to publicly release the benchmark and open-source framework, instead of pursuing a different path such as making these more sensitive aspects of the results available only to trusted audiences (ie. known and trusted researchers, US AI Safety Institute, etc)?*\\n\\nWe believe that by making the code, datasets, hyperparameters, etc. available, we provide a toolkit for researchers and practitioners to get started with multi-turn RL for LLMs (focusing on both online & offline RL). Specifically, we have trained models with fewer parameters specifically to allow for further development of algorithms by everyone regardless of access to large compute and resources, further emphasizing the importance of equity in opportunity to do research in RL and LLMs.\"}", "{\"summary\": \"The authors propose the LMRL-Gym benchmark, a collection of tasks and an open-source framework inspired by the lack of standardized multi-turn language-based tasks to evaluate reinforcement learning algorithms on. The benchmark consists of two types of tasks: three \\\"interactive dialogue\\\" tasks involving dialogue partners simulated by finetuned language models that stress information seeking behavior and persuasion and five \\\"RL capability\\\" tasks that are intended to test general RL challenges such as credit assignment and trajectory stitching. Each task provides offline data by suboptimal policies to perform offline RL with as well simulators to conduct online RL on. 
The authors benchmark various behavior cloning, offline RL and online RL algorithms on all proposed tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper does address an important gap in the current literature. As the authors state, most work applying reinforcement learning on language models centers on single turn interactions while work on multi-turn interactions often requires humans in the loop, which is expensive, slows down iteration and is challenging to replicate. The proposed collection of tasks, while synthetic and inspired by already existing scenarios, can therefore act as a useful test bed for reinforcement learning algorithms for multi-turn language-based tasks.\\n2) I also appreciate the inclusion of offline data from sub-optimal policies, allowing for the development of both offline and online RL algorithms.\", \"weaknesses\": \"1) My main concern, and my reason for giving a 2 on soundness, is whether the human evaluation on Appendix A is sufficient to show the correctness of the LLM simulator for the interactive dialogue tasks. There is no provided definition of \\\"naturalness\\\" and also no examples of the instructions given to the annotators. As a result, it is unclear whether the annotators were focused, for instance, on fluency or whether the simulator was accurate.\\n\\nIt would help, for instance, to conduct a separate experiment on the self-consistency of the LLM oracle. For the information seeking tasks, for example, this can involve taking a random sample of conversations and checking, either via human annotation or by prompting an LLM, if the oracle's answers to questions are consistent with the object they have in mind. \\n\\n2) My second concern is the choice of tasks for the RL capability component of the benchmark. 
Barring the Text-Nav and to a lesser extent Wordle settings, the tasks are regular reinforcement learning tasks that are presented in natural language but do not really test language understanding or use. While I recognize that these are intended to be unit-tests for various RL capabilities in language models, I do not have good intuition on how well algorithm success on these would generalize to multi-turn dialogue or tool use.\", \"questions\": \"I would like some more detail on the Car Dealer setting. I checked the paper and the appendix but could not find details on the reward function or the success condition. Specifically:\\n1. Does the model need to act in accordance with a particular seller archetype or were the three different types there to generate diverse data?\\n2. Is success dependent on simply selling a car (at which point a degenerate strategy of selling a car for $0 would succeed) or are there other conditions that determine success or reward? How are these implemented?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer j29V\", \"comment\": \"We thank the reviewer for their feedback. We've addressed the main points raised in your review by: (1) further clarifying our human evaluation and how we validated our simulator (2) conducting a self-consistency study of the LLM oracle as requested (3) elaborating on our choice of tasks for LMRL Gym and how they extend to multi-turn dialog / tool use (4) answering clarifying questions on the Car Dealer Task.\\n\\n1. *My main concern, and my reason for giving a 2 on soundness, is whether the human evaluation on Appendix A is sufficient to show the correctness of the LLM simulator for the interactive dialogue tasks. There is no provided definition of \\\"naturalness\\\" and also no examples of the instructions given to the annotators. 
As a result, it is unclear whether the annotators were focused, for instance, on fluency or whether the simulator was accurate.*\\n\\nWe thank the reviewer for their question. We would like to further clarify how we validated our simulator. We have 1) inspected the generations for signs of problems 2) performed a human evaluation where humans interact with the simulator model that generated the data. If humans are able to successfully interact with the models, it is a clear signal that our data is also natural and contains the desired properties. Additionally, we have found better performance from our algorithms compared with the BC models, signaling that the data the simulator was trained on is providing a useful signal for the improvement of RL algorithms.\\n\\nRegarding human evaluation and providing a definition of naturalness to the user, here are the instructions we have provided to users: \\u201cThank you for participating in our conversation naturalness rating survey. Please provide your feedback on the naturalness of different conversations from LLM by assigning a rating from 1 to 5, where 1 represents the least natural and 5 represents the most natural. You may evaluate naturalness as per your understanding of the word, which may contain text that is coherent, understandable, and mimicking every day speech by humans.\\u201d \\n\\nAdditionally, we would like to note the motivation behind why we have used simulators for LLM evaluation. Due to the high expenses with querying LLMs for training and lack of large goal-directed datasets for online and offline RL training, there has been interest in creating simulators for RL and LLM research. Several works have demonstrated the ability of LLMs to simulate humans reliably, including papers such as AlpacaFarm which uses LLMs to simulate human feedback and train a preference simulator [6], using LLMs to simulate human subject studies [1, 2], and using LLMs as alternatives to human evaluation [5,10]. 
Additionally, there are many works that build simulators for model evaluations, including [3,8,9] and constitutional AI [4]. Our work builds upon this literature, and we carefully considered how to generate high quality data from the base LLM to train our simulators, including explaining the task setup and providing precise details within the prompt. You can see the Appendix for the prompts we used for our simulators.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper highlights that current LLMs are trained to imitate golden responses rather than genuinely learning to reason and solve single-turn tasks. Additionally, there is a lack of benchmarking for multi-turn RL tasks, along with the absence of established evaluation protocols, which can be costly. To address this, the authors synthesize a benchmark that leverages the imitation capabilities of language models in conjunction with simulators, such as chess engines. They propose the LMRL-GYM benchmark, which comprises three interactive dialogue tasks and five RL capability tests, benchmarking existing RL methods, including offline methods like ILQL and online methods like PPO, among others.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper raises a significant question regarding the benchmarking of different RL algorithms in multi-turn scenarios and introduces the LMRL-GYM benchmark, which consists of several tasks designed for evaluation. It assesses a diverse range of RL algorithms while also providing a comprehensive evaluation framework.\", \"weaknesses\": \"1. The real-world tasks included in the benchmark are not sufficiently representative, as they only incorporate three tasks that focus on abilities such as persuasion and information gathering.\\n2. The dataset construction appears somewhat unconvincing. 
For the interactive dialogue tasks, authors initially use two GPT-3.5 models to generate the dataset and then train two FLAN-T5-XL models to imitate the guesser and oracle roles. Since these are relatively small models, the resulting dialogues may lack diversity and representativeness. The reliability of the benchmarking results for various RL algorithms raises concerns. While the authors conducted a user study to assess the naturalness of the synthesized datasets, I remain skeptical about the benchmark's overall naturalness.\\n3. The RL ability benchmark, which consists of five tasks, has a limited action space, deviating from real-world scenarios that utilize RL with much larger action spaces, such as step-wise scoring for tasks like math or code generation.\\n4. The experiments are conducted with small models; is the benchmark applicable to larger models? Since small models can achieve nearly 100 rewards on some tasks (as shown in Table 2), this may impact the significance of the benchmark.\\n5. In Table 2, the performance of GPT-4 prompting is significantly worse than that of the RL algorithms on the RL capability tasks, despite GPT-4 also being trained using RL methods. Can you comment on this?\\n6. The right side of Table 1 extends beyond the page margin, and some tables in the appendix exhibit the same issue.\", \"questions\": \"The questions are outlined in the weaknesses section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9eFK\", \"comment\": \"2. *\\u201cWhile it is said that human evaluators looked into the naturalness of the text, there is limited discussion of how consistent the simulated content would be with natural text. 
It's unclear how much variance there was across runs and different hyperparameters.\\u201d*\\n\\nRegarding the naturalness of the text from our simulators, we have evaluated the quality of our data by 1) inspecting the generations for signs of problems 2) performing a human evaluation where humans interact with the simulator model that generated the data. If humans are able to successfully interact with the models, it is a clear signal that our data is also natural and contains the desired properties. Additionally, we have found better performance from our algorithms compared with the BC models, signaling that the data the simulator was trained on is providing a useful signal for the improvement of RL algorithms. \\n\\nWe would like to clarify the methodology with which we generated our datasets to train our simulators, and how we ensured high quality and consistency for these datasets. As shown in Figure 2, we train a simulator that serves as an \\u201coracle\\u201d for the task, and hence does not require any capabilities of strategic reasoning, but provides signals to help the agent model learn. For example, the role of the oracle in the Twenty Questions task is to provide objective yes/no answers to questions about the object, and in Guess My City, to provide more open ended information about a query on the city. OpenAI\\u2019s GPT-3.5 has been shown to be able to generate reasonable questions and answers when used out of the box, which is why we leveraged it to collect our initial dataset. We have provided prompts that we use to generate the data to train our oracle models in our Appendix, and snippets below to show our thought process to maintain high accuracy. \\n\\nThe method for collecting the dataset is as follows. For each conversation, we select uniformly at random from the above list the word that the oracle is answering question about. The oracle is an LLM (OpenAI\\u2019s GPT3.5) given the following prompt. 
In our prompts, we denote variables that we fill in with variable data with {{variable}}.\", \"prompt\": \"You are a question answering oracle. You will answer each question about an object with Yes or No. If the answer could be both, answer with the most typical scenario. Here\\u2019s a few examples:\", \"example_1\": \"\", \"object\": \"Computer\", \"question\": \"Does the object use electricity?\", \"answer\": \"Yes.\", \"explanation_of_answer\": \"Computers need electricity to function. [...]\\n\\nAdditionally, we have also validated the data from trained oracle models through human evaluation. We have also provided generated examples by both oracle models and trained agents in our Appendix. With respect to the Car Dealer task, we spent considerable effort to ensure diversity in the responses of sellers, by providing different desired brands, features, classifications (i.e. car or truck), and budgets. We have provided samples of conversation between the oracle model and MC returns vs. oracle and the BC model in the Appendix.\\n\\nWe would also like to clarify that the goal of our study is to show that humans believe that both the text generated by the models as well as from the simulators are fairly natural, and we found them to believe it to be natural more than 50% of the time.\"}", "{\"title\": \"Response to Reviewer WagK\", \"comment\": [\"3. \\\"What contribution and value addition does the present work make? It seems that already published papers cover the paper's goals.\", \"The goal of our paper is to present a benchmark that applies RL algorithms to multi-turn tasks, specifically to perform goal-directed dialogue. 
To this end, we provide (1) online simulators and offline datasets for a suite of 7 text-based strategy games and dialogue tasks (2) methodology to create simulators for offline evaluation, online RL training, and computing rewards (3) a research framework and toolkit for researchers and practitioners to get started with multi-turn RL for LLMs (focusing on both online & offline RL), which includes implementations of PPO, ILQL, and several baseline methods.\", \"In order to clarify the contribution of the paper, we provide an extensive comparison to related works in text games, interactive dialog tasks and offline RL. Below for each related work, we have described how LMRL-Gym is different. For our final revision, we will summarize these points in our related works section, but for clarity, we have included the full discussion.\", \"**Related Work on Text-Games**\", \"[1] Chevalier-Boisvert, M., Bahdanau, D., Lahlou, S., Willems, L., Saharia, C., Nguyen, T. H., & Bengio, Y. (2018). Babyai: A platform to study the sample efficiency of grounded language learning. arXiv preprint arXiv:1810.08272.\", \"BabyAI involves a creating sentences in the Backus-Nour-Form Grammar\", \"It is not a text-based representation, and instead a state is passed as a vector\", \"RL is trained on the state\", \"It is unclear how to use LLMs to solve this task as the representation of the state is not language based\", \"This task cannot be easily used to evaluate RL/LLM tasks\", \"[2] Gontier, N., Rodriguez, P., Laradji, I., Vazquez, D., & Pal, C. (2023). Language Decision Transformers with Exponential Tilt for Interactive Text Environments. arXiv preprint arXiv:2302.05507.\", \"Uses DYNA style data generation for the tasks\", \"Results indicate they may not have collected enough data for offline RL algorithms, as offline RL performs poorly [17, 18,19,20]\", \"They do not provide their dataset\", \"[3] Hausknecht, Matthew, et al. 
\\\"Interactive fiction games: A colossal adventure.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020.\", \"Introduces the Jericho Benchmark\", \"Our smallest task includes a dataset of 1.25k trajectories. This dataset contains 590 trajectories. A large, diverse dataset is critical for testing offline RL [17, 18]\", \"Our benchmark is not only text-games and using templates for interaction, we utilize free-form text generation and simulate human-AI interaction\", \"[4] Shridhar, M., Yuan, X., C\\u00f4t\\u00e9, M. A., Bisk, Y., Trischler, A., & Hausknecht, M. (2020). Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768.\", \"The work is similar to the TextWorld benchmark, but LMRL-Gym benchmark is a lot more than Text-Nav, and this is our simplest task mainly meant to test implementation and correctness (e.g. \\u201cunit test\\u201d)\", \"LMRL-Gym has other text-games and dialogue tasks that are more complex and test a variety of RL Capabilities such as:\", \"Credit assignment: learning to assign credit to good strategy rather than lucky starting point\", \"Trajectory Stitching: ability of agent to use successful techniques from different trajectories in the dataset\", \"Partial Observability: the true state of the world is not completely represented in text\", \"Complex language: free-form generation and stochastic descriptions and processes for environment actions and text generation\", \"Credit assignment and trajectory stitching are important capabilities for testing offline RL as discussed in [17, 18]\", \"[5] Wang, R., Jansen, P., C\\u00f4t\\u00e9, M. A., & Ammanabrolu, P. (2022). Scienceworld: Is your agent smarter than a 5th grader?. 
arXiv preprint arXiv:2203.07540.\", \"This paper has 211,092 training examples for the behavior cloning\", \"They have successfully used online and offline RL\", \"Focused on completing tasks related to scientific reasoning\", \"No focus on interactive communication with humans/more stochastic environments, or partial observability as LMRL-Gym\", \"[6] Yao, S., Chen, H., Yang, J., & Narasimhan, K. (2022). Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35, 20744-20757.\", \"Otherwise allows for free-form text generation, though formulated to interact with a website using \\u201csearch[big red box]\\u201d, and has rewards\", \"In this task the agent must navigate amazon.com to buy a product according to the user\\u2019s query\", \"Can be used to evaluate online RL, not offline (lacking a dataset)\", \"In LMRL-Gym we put a great deal of effort in generating data in such a way that it tested the RL Capabilities of trajectory stitching and credit assignment\", \"Where LMRL-Gym does better: 1) longer interactions 2) simulate conversations with humans\"]}", "{\"title\": \"Response to Reviewer WagK\", \"comment\": \"10. *\\\"L028 - L031:Question: How does the Benchmark cover tasks in open-ended dialogue and text games is not described in the paper? There can be many notions of open-endedness be it in dialogue or Reinforcement Learning. Please specify with examples what form of open-endedness has been discussed here.\\\"*\\n\\nWe refer to open-ended dialogue loosely in the sense of dialog that is conversational and not constrained by predefined or rigidly structured responses, such as the dialog in Guess My City and the Car Dealer task. As this wording is confusing for the reviewer, we will modify this sentence. We thank the reviewer for bringing this to our attention. \\n\\n11. 
*\\\"L047 - L049: Question: If existing research papers have already covered multi-turn dialogues, complex tool use and multi-step games, then what is the contribution of this paper by using RL algorithms for multi-turn dialogues, multi-step games? Please share any research or engineering benefits?\\\"*\\n\\nThe main contribution of this paper is providing algorithms primarily for offline RL. Please refer to our response to Question 3. \\n\\n12. *\\\"What are the other interactive applications that has been mentioned in the above L047-L049 lines?\\\"*\\n\\nOther interactive applications include simulating characters in long-horizon interactions (i.e. for interactive storytelling), customer support tasks, and as a tool for educational settings. \\n \\n13. *\\\"How important are RL capability tests for multi-turn RL? How are challenges of RL (generalizability, sample complexity etc) affecting the LLM interaction?\\\"*\\n\\nWe would like to clarify that each of the tasks in the benchmark serves a different purpose. The RL Capability tests are ideal testbeds for testing multi-turn RL properties in language, because they are text-based versions of tasks where RL is known to excel. We design each of the tasks with specific properties and comparisons in mind. For example, for the Maze and Text-Nav we test both partially observed and fully observed versions to highlight the impact of partial observability. In addition, the Text-Nav task is very similar to the Maze task, but places more emphasis on realistic text. Some tasks (Guess My City, Car Dealer) aim to evaluate tasks with realistic natural language. Some tasks aim to test specific RL properties without the complexities of realistic language, while others focus on complex language. Algorithm developers would be expected to evaluate their methods on the totality of all the tasks and we discussed this in Section 4.3. 
To address your comment as to whether language is increasing the performance relative to exclusively symbolic approaches, we have provided a symbolic version of the Maze task for better comparison with the text-based version, to further motivate the importance of text-based language games in building RL algorithms for LLMs. We found that simple online and offline Q-learning was able to get an optimal score on the maze. Therefore, performance on the symbolic maze is comparable to the fully observed Maze task. However, on the partially observed Maze task, the language-based methods perform significantly worse. This highlights room for improvement on dealing with partial observability in RL with language. We have included these details in the Appendix, Section G.\\n\\n14. *\\\"Policy Gradients in RL algorithms can be unstable for which different seeds have to be selected. What seeds were selected for the policy-gradient algorithms supported in this work like PPO and other algorithms?\\\"*\\n\\nYes, we did find PPO to be quite unstable, and used several techniques to overcome instabilities in PPO training. The techniques that we have attempted are 1) to incorporate BC loss into the objective, 2) increase the number of rollouts used, and 3) increase the KL coefficient. These were able to mitigate the instabilities for Wordle, but not for Maze.\"}", "{\"summary\": \"1. The paper introduces the LMRL-Gym benchmark for evaluating multi-turn Reinforcement Learning (RL) for Large Language Models (LLMs).\\n2. 
The benchmark consists of 3 Interactive Dialogue tasks and 5 RL Capability tests which require multiple rounds of language interaction.\\n3. A research toolkit for practitioners has been provided to get started with multi-turn RL for LLMs with offline value-based and online policy-based RL methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A benchmark LMRL-Gym highlighting the importance of multi-turn RL for LLMs has been proposed in the paper. Evaluating multi-turn RL is important for LLMs, and offers future introspection whether RL can generalize for LLMs.\\n\\nA research toolkit has been proposed for multi-turn RL for LLMs with offline value-based and online policy-based RL. This can be useful to practitioners in the field as an engineering guide.\", \"weaknesses\": \"Lines numbers have been abbreviated as L# in the points below e.g. L100 means Line 100. Observations have been given quoting paper lines. Some observations are general where no line numbers have been quoted.\\n\\n1. L074-L075 Multi-turn reinforcement learning (RL) (Sutton&Barto,2018) in principle offers a path to enable LLMs to do just that.\", \"observation\": \"Correction of the phrase \\u2018is enable\\u2019 to \\u2018is to enable\\u2019 should be done\", \"questions\": \"Lines numbers have been abbreviated as L# in the points below e.g. L028 means Line 28. Questions have been given quoting paper lines. Some observations are general where no line numbers have been quoted.\\n\\n1. L 028 L031 Our benchmark consists of 3 Interactive Dialogue tasks and 5 RL Capability tests for a total of 8 tasks, which require multiple rounds of language interaction and cover tasks in open-ended dialogue and text games.\", \"question\": \"If existing research papers have already covered multi-turn dialogues, complex tool use and multi-step games, then\\n\\nWhat is the contribution of this paper by using RL algorithms for multi-turn dialogues, multi-step games? 
Please share any research or engineering benefits?\\n \\n3. Question: What are the other interactive applications that has been mentioned in the above L047-L049 lines?\\n\\n4. Question: How important are RL capability tests for multi-turn RL? How are challenges of RL (generalizability, sample complexity etc) affecting the LLM interaction?\\n\\n5. Question: Policy Gradients in RL algorithms can be unstable for which different seeds have to be selected. What seeds were selected for the policy-gradient algorithms supported in this work like PPO and other algorithms?\", \"there_can_be_many_notions_of_open_endedness_be_it_in_dialogue_https\": \"//begrijpelijkeformulieren.org/sites/begrijpelijkeformulieren/files/Reja_e.a._Open-ended_vs._Close-ended_Questions_in_Web.pdf or Reinforcement Learning https://proceedings.mlr.press/v119/wang20l.html\\n\\nPlease specify with examples what form of open-endedness has been discussed here.\\n\\n2. L 047 L049 This challenge is apparent in solving temporally extended tasks, such as multi-turn dialogue (Irvine et al.,2023;,FAIR), complex tool use (Wang et al.,2022a), multi-step games (Hendrycksetal.,2021b), and other interactive applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a new benchmark to evaluate LLM agents in a dialogue setting. An agent interacts with an LLM (a proxy for a human) to engage in a dialogue to solve an RL task.\", \"strengths\": \"This is an important and interesting task, and one that is surprisingly overlooked in LLM benchmarks. The more common tasks is for the LLM to generate a single response and get reward for it, or to take symbolic actions in multi-turn setting. However, reviewers raised several concerns regarding the benchmark:\", \"weakness\": \"1. Tasks are somewhat simplistic; not all are natural dialogue tasks. This was noted by the reviewer j29V and 9eFK. 
I personally found the Car Dealer task to be a good example, and it would have been great to have more real-world tasks like it (e.g., hotel recommendation, flight booking).\\n\\n2. Evaluations are restricted to GPT2 models. I understand that only a few labs can train GPT-4 or even 70B models, but GPT-2 is at this point quite outdated. Even a 2B or 3B model would have been nice.\\n\\nOverall, I like this direction but I think this needs more work. At a minimum, either more real-world tasks or experiments with bigger models would be needed. Alternatively, the authors can focus more on the human evaluation of LLM agents. For now, I am recommending a weak reject, but I wouldn't mind if the paper was accepted.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers focused on\\n\\n1. Lack of experiments with models bigger than GPT2 (reviewer 9eFK and reviewer KbFz raised this)\\n2. Tasks being somewhat simplistic or not requiring language understanding (reviewer 9eFK and j29V)\\n3. Ethical concerns about whether the benchmark can be used improperly\\n4. Whether the human study was properly done\\n\\n(1) and (2) are indeed concerning. Regarding (3), I am unsure. I think most papers can benefit from ethics review so I am not against it, but I think this paper is not using a realistic dataset and is not different from the vast majority of works in the LLM benchmark space. The emphasis on persuasion does put a bit of a different angle on it, so perhaps an ethics review can potentially help. Finally, the authors ran self-consistency experiments to validate the benchmark for (4), and the results look good to me, so this issue was resolved.\"}", "{\"title\": \"Response to Reviewer j29V\", \"comment\": \"2. 
*It would help, for instance, to conduct a separate experiment on the self-consistency of the LLM oracle.\\\"*\\n\\nAs per your request, we have conducted a separate experiment on the self-consistency of the LLM oracle by taking the same sample of conversations used in our human evaluation in Appendix A and prompting an LLM (specifically gpt-4o-mini) as to whether the oracle\\u2019s answers to questions are consistent with the object they have in mind. Our process for prompting is as follows: For Twenty Questions & Guess My City, we take every single question in the dialog task and ask whether the oracle's answer to each question or statement is consistent with the object or city. For the Car Dealer task, we ask whether the statement the buyer is making is an appropriate response to the seller's statement or query. We ask the model to rate on a Likert scale (i.e. assigning a rating from 1 to 5, where 1 represents the least consistent and 5 represents the most consistent), and ask it to provide an analysis as to why it has provided this rating (to make sure the rating is valid and not a random number). Then, we take the average of all the ratings for each question for each task. For the tasks, we received the following ratings:\\n\\n**Twenty Questions** - mean: 4.9, std: 0.05\\n\\n**Guess My City** - mean: 4.35, std: 0.318\\n\\n**Car Dealer** - mean: 4.9, std: 0.024\", \"here_are_a_sample_set_of_instructions_for_the_llm_for_guess_my_city_and_a_sample_answer\": \"***************************************************************************************************\\n**Prompt for Guess City**:\\nThis is a conversation of two players (agent and environment) playing a game where the goal is for one player (agent) to guess the city the other player (environment) has in mind. The agent can ask both yes/no and open-ended questions, and must guess within 20 questions. Could you tell me if the answers of the environment are consistent with the city Madrid? 
Please provide a rating from 1 to 5, where 1 represents the least consistent with the city and 5 represents the most consistent with the city. Provide a rating for each statement by the environment with an analysis of why.\", \"agent\": \"Is your city located in the region of Madrid?\", \"environment\": \"Yes, it is located in Spain.\", \"analysis\": \"Madrid is the capital of Spain, making this answer correct.\", \"rating\": \"5/5\\n\\nWe do the same process for Twenty Questions and Car Dealer, modifying the prompts accordingly to fit the roles in dialog. We have found with this experiment that an LLM believes the responses from the oracle models to be consistent with the responses from the Agent, which matches with our other consistency checks. As per the reviewer's suggestion, we will include this experiment in the paper.\"}", "{\"title\": \"Response to Reviewer 9eFK\", \"comment\": \"3. *\\\"The current tasks don't require very complex reasoning or long-term memory. It's unclear whether the benchmark may become saturated by larger models who already are often used multi-turn. It could be interesting to look into whether language is increasing the performance relative to exclusively symbolic approaches.\\\"*\\n\\nWe would like to clarify that each of the tasks in the benchmarks serves a different purpose. The RL Capability tests are ideal testbeds for testing multi-turn RL properties in language, because they are text-based versions of tasks where RL is known to excel. We design each of the tasks with specific properties and comparisons in mind. For example, for the Maze and Text-Nav we test both partially observed and fully observed versions to highlight the impact of partial observability. In addition, the Text-Nav task is very similar to the Maze task, but places more emphasis on realistic text. Some tasks (Guess My City, Car Dealer) aim to evaluate tasks with realistic natural language. 
Some tasks aim to test specific RL properties without the complexities of realistic language, while others focus on complex language. Algorithm developers would be expected to evaluate their methods on the totality of all the tasks, and we discussed this in Section 4.3. To address your comment as to whether language is increasing the performance relative to exclusively symbolic approaches, we have provided a symbolic version of the Maze task for better comparison with the text-based version, to further motivate the importance of text-based language games in building RL algorithms for LLMs. We found that simple online and offline Q-learning was able to get an optimal score on the maze. Therefore, performance on the symbolic maze is comparable to the fully observed Maze task. However, on the partially observed Maze task, the language-based methods perform significantly worse. This highlights room for improvement on dealing with partial observability in RL with language. We have included these details in the Appendix, Section G.\\n\\n4. *\\\"How well would these results generalize to larger language models?\\\"*\\n\\nWe thank the reviewer for this question. Although we have used smaller models in our experiments, we would like to note that there have been recent papers that have leveraged our Twenty Questions, Guess My City, and Car Dealer tasks and have trained 3 - 7 billion parameter scale models with the tasks. There have also been works that have used our repository and algorithms to train 7 billion parameter scale models, showing the scalability of our results and contribution.\"}", "{\"summary\": \"The authors introduce a novel benchmark called LMRL-Gym to evaluate multi-turn RL capabilities through 8 tasks. The tasks include 3 Interactive Dialogue Tasks (ex. persuading a user to buy a car) and 5 RL Capability tasks (ex. navigating a maze). The paper evaluates a series of online and offline methods across these tasks. 
On many of the RL tasks, Implicit Language Q-Learning (ILQL) performed best, including 99.9 on one of the maze tasks. However, on the Interactive Dialogue tasks, simpler methods such as Monte Carlo Returns achieved a higher score than ILQL. This suggests that perhaps these TD-learning approaches may scale poorly to more complex textual tasks. While the GPT-4 few-shot baseline performed well on Interactive Dialogue Tasks, it struggled with game tasks like Chess or Endgames. PPO had strong performance on some tasks, but showed training instabilities. Interestingly, different RL methods did well on different tasks, leaving open potential for further research to optimise for both linguistically and strategically complex tasks. The majority of experiments were conducted on GPT-2 variants for benchmark accessibility to researchers with small compute budgets. When generating synthetic data for the dialogue tasks, the authors used GPT-3.5 and validated the naturalness of data with human evaluation. This work overall contributes a benchmark and research framework with which to develop better RL algorithms for LLMs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Originality: The paper presents one of the first published benchmarks for evaluating multi-turn RL methods. While it's likely frontier labs have such data internally and chosen not to publish it, this is the first paper I've seen making these types of results and code public.\", \"quality\": \"The paper uses a GPT-4 few-shot baseline which provides a strong comparison against several other implemented baseline methods (PPO, ILQL, MC Returns, etc). The authors do a laudable job of using ablation studies to validate their use of LLM simulators which could be exploitable. 
In general, the authors tend to substantiate their claims thoroughly and explain potential weaknesses transparently.\", \"clarity\": \"The writing is clear and straight forward with illustrative figures and an extensive appendix.\", \"significance\": \"This benchmark and task-set addresses a current gap in publicly available benchmarks for multi-turn RL. This could be useful towards benchmarking novel RL methods and informing future research directions to optimise for both textual and strategic/planning performance. However, there is also a risk of this work being used to fine-tune more agentic, persuasive and thus potentially dangerous systems.\", \"weaknesses\": \"1. Scaling of Results\", \"this_one_might_be_hard_to_fix_without_having_computational_budget\": \"however one weakness of the paper is that the majority of the experiments are conducted on GPT-2 variants, leaving it unclear how these results may scale to larger models. For instance, it would be quite interesting to see whether the same findings regarding offline and online method differences in textual and strategic task performance remain when considering multimodal models or larger models with longer context windows.\\n\\n2. Failure Analysis\\nIt would be interesting to see a few more examples (qualitative would be fine) of some of the observed failure modes, and some further analysis on where and why specific methods fail. The current results regarding online and offline are quite interesting and it'd be helpful for future work to understand more what might be causing this.\\n\\n3. Capabilities Coverage of Tasks\\nThe current tasks don't require very complex reasoning or long-term memory. It's unclear whether the benchmark may become saturated by larger models who already are often used multi-turn. It could be interesting to look into whether language is increasing the performance relative to exclusively symbolic approaches. \\n\\n4. 
Evaluation Methods\\nWhile it is said that human evaluators looked into the naturalness of the text, there is limited discussion of how consistent the simulated content would be with natural text. It's unclear how much variance there was across runs and different hyperparameters.\\n\\nThe paper is already quite extensive and the authors do acknowledge some of these limitations.\", \"questions\": \"How well would these results generalize to larger language models?\\n\\nIn the impact statement, you mention dual-use implications of your work including \\\"persuasion, manipulation and addictive engagement of users at a large scale\\\". You then mention that you \\\"have designed [y]our datasets and reward functions such that [they] prioritize fairness and human-aligned outcomes\\\". Can you please elaborate which design steps you have taken to achieve this prioritization? \\n\\nYou also express the intent to make public the code, datasets, hyperparameters as well as training procedure. What makes you confident that you sufficiently mitigate the stated implications and risks such that it is safe to publicly release the benchmark and open-source framework, instead of pursuing a different path such as making these more sensitive aspects of the results available only to trusted audiences (ie. known and trusted researchers, US AI Safety Institute, etc)?\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"While I think this paper presents scientifically valuable work, I am concerned that publishing it's results (specifically the code, dataset, hyperparameters and training procedure) without any further oversight may be on net harmful and I recommend against open-sourcing these components of this paper. I propose the authors rework this publication to not open-source the framework to the general public, and instead with a more limited set of actors (an example list is given below). 
I also propose the authors rework their impact statement to more accurately reflect the negative effects of making this work public.\", \"more_detail\": \"This paper targets arguably the top three most harmful capabilities that AI Safety researchers warn about (ie. long-horizon-reasoning, agentic goal-seeking, persuasion of human targets). While benchmarks are helpful towards measuring how dangerous models are along this axis (as for example the US and UK governments may soon want to do), if they are made public -- these benchmarks can be used as training and fine-tuning inputs to specifically achieve these capabilities faster. Persuasion capabilities pose a particular concern as they can contribute to loopholes in typical containment proposals (within the context of securing models, for example from autonomously replicating).\", \"effects\": \"Realistically all the top labs like OAI, DeepMind, Anthropic likely already have such internal benchmarks and are using them. However, these large groups have made voluntary responsible scaling commitments and will likely be subject to government oversight. Publishing LMRL-Gym for unrestricted access to the general public presents a larger challenge, given small actors who are far harder to oversee may use it for nefarious purposes, such as fine-tuning LLMs for persuasion and using these for phishing attacks, manipulative sales practices, etc. At the more concerning end, more agentic and long-term reasoning open source models may present larger threats as they improve their abilities to access resources and complete tasks autonomously. This can be highly dangerous given the nascent state of model control techniques and AI safety.\\n\\nI propose the above listed sensitive parts of this work should be made accessible to trusted parties only who will use it for positive aims. 
For example, I recommend sharing with the US AI Safety Institute, and the UK AI Safety Institute, the Department of Commerce, and perhaps in a limited capacity with adequate oversight to known and trusted researchers.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We hope these responses have helped answer the reviewers questions and that you consider raising your score :)\"}" ] }
EdAgWYEFc1
Retrieval-Reasoning Large Language Model-based Synthetic Clinical Trial Generation
[ "Zerui Xu", "Fang Wu", "Tianfan Fu", "Yue Zhao" ]
Machine learning (ML) has exhibited considerable promise in the clinical domain. However, its capabilities are constrained by data scarcity and ethical considerations, as the generation of clinical trials presents significant challenges due to stringent privacy regulations, high costs, and the extended duration required for conducting studies with human participants. Despite the advancements of large language models (LLMs) in natural language understanding and generation, limited research has explored their potential in facilitating the generation of synthetic clinical trials. To address this gap, we introduce a novel Retrieval-Reasoning few-shot framework that leverages LLMs to generate artificial yet realistic and diverse clinical trials with binary success/failure labels. Extensive experiments conducted on real clinical trials from the ClinicalTrials.gov database demonstrate that our generated synthetic data can effectively augment real datasets. Furthermore, by fine-tuning a pre-trained model as a binary classifier on synthetic clinical trial datasets, we demonstrate that this augmentation enhances model training for downstream tasks such as trial outcome prediction. Our findings suggest that leveraging LLMs for synthetic clinical trial generation holds significant promise for accelerating clinical research, enabling more effective ML models in healthcare, and upholding ethical standards for patient privacy.
[ "Synthetic Data Generation", "Large Language Model", "Clinical NLP" ]
https://openreview.net/pdf?id=EdAgWYEFc1
https://openreview.net/forum?id=EdAgWYEFc1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "DBHOS4wQFW" ], "note_type": [ "comment" ], "note_created": [ 1728295380816 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13018/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
EcrdmRT99M
The Effectiveness of Curvature-Based Rewiring and the Role of Hyperparameters in GNNs Revisited
[ "Floriano Tori", "Vincent Holst", "Vincent Ginis" ]
Message passing is the dominant paradigm in Graph Neural Networks (GNNs). The efficiency of message passing, however, can be limited by the topology of the graph. This happens when information is lost during propagation due to being oversquashed when travelling through bottlenecks. To remedy this, recent efforts have focused on graph rewiring techniques, which disconnect the input graph originating from the data and the computational graph, on which message passing is performed. A prominent approach for this is to use discrete graph curvature measures, of which several variants have been proposed, to identify and rewire around bottlenecks, facilitating information propagation. While oversquashing has been demonstrated in synthetic datasets, in this work we reevaluate the performance gains that curvature-based rewiring brings to real-world datasets. We show that in these datasets, edges selected during the rewiring process are not in line with theoretical criteria identifying bottlenecks. This implies they do not necessarily oversquash information during message passing. Subsequently, we demonstrate that SOTA accuracies on these datasets are outliers originating from sweeps of hyperparameters—both the ones for training and dedicated ones related to the rewiring algorithm—instead of consistent performance gains. In conclusion, our analysis nuances the effectiveness of curvature-based rewiring in real-world datasets and brings a new perspective on the methods to evaluate GNN accuracy improvements.
[ "Geometric deep learning", "Graph Neural Networks", "Graph Rewiring", "Curvature" ]
Accept (Poster)
https://openreview.net/pdf?id=EcrdmRT99M
https://openreview.net/forum?id=EcrdmRT99M
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yN5K5zP93X", "uNn5PkDZnB", "jHUmwME3p8", "dgLzOD19Rk", "d8AXbExZ21", "WgYbvi28cE", "ToTshR2ltL", "Tc7Wqw4qmS", "TFId7EDggf", "SJDHQ8DYBK", "Oi5lpKD0Vv", "OhT1effUY4", "Kw701IOaR9", "CbxxVjZ80D", "CU9fDAB9IO", "CHbW2tikIB", "C4Q1iZxBLM", "C4A1Sg7JPN", "99EiMIlmWX", "97mrWgUVSA", "8XZgazFc4u", "7yBYR8SDti", "5UjMpk4wap", "4nbW2GF7Fy", "2CL2yVynFd" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732537893486, 1732185011010, 1730735357779, 1732184357629, 1733175289210, 1732706603244, 1732184700579, 1730064872951, 1732612276443, 1732183559268, 1734739852452, 1732438018029, 1730306821206, 1733164891189, 1737524034711, 1732964609035, 1732611256484, 1730670011972, 1732184961757, 1732184381294, 1732183544018, 1732611226374, 1732185237236, 1732184667798, 1732509036480 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_VNfT" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_VNfT" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_GwmN" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Area_Chair_RuUT" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_UNSN" ], [ 
"ICLR.cc/2025/Conference/Submission10226/Reviewer_UNSN" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_GwmN" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_dc59" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Authors" ], [ "ICLR.cc/2025/Conference/Submission10226/Reviewer_dc59" ] ], "structured_content_str": [ "{\"comment\": \"We would like to thank the reviewer for their continued engagement with our work. We respectfully emphasize that our results represent a significant contribution to the field, as it challenges prevailing assumptions in the field of graph rewiring.\\nOur work addresses fundamental mismatches between theory and practice in curvature-based rewiring, providing insights that prevent reliance on misleading benchmarks and guide future method development. We firmly believe that scientific progress is not solely achieved by proposing new ideas and continually building upon the flawed foundation of previous work and that correcting incorrect results is equally, if not more, critical to ensuring the integrity and advancement of scientific knowledge.\\n\\nWe respectfully highlight that our study goes beyond benchmarking by providing important insights into the implementation of graph rewiring techniques and corrects underlying assumptions in the field of graph learning. We hope the reviewer considers this perspective when evaluating the broader impact of our contribution.\"}", "{\"comment\": \"**4. 
Could the authors explain why they focus on the theoretical results related to SDRF and not the ones related to BORF, another curvature-based method? Their theoretical results seem less restrictive than what\\u2019s presented in the SDRF paper, so I'm wondering what an analogue of Table 1 could look like in this context.**\\n\\nIn the present manuscript, we focus on discrete curvature notions that can be expressed via combinatorial equations and are thus easier to compute than the Ollivier (Ricci) Curvature used for BORF in [3] which involves solving an optimal transport problem. However, thanks to Theorem 2 from [1], we know that the Balanced Forman curvature is a lower bound for the Ollivier (Ricci) Curvature. Therefore, in lines 184-188, our initial submission already included a section on how our results could be applicable to BORF [3]: \\n\\n\\u201c*In [2], it is shown edges i \\u223c j with an Ollivier (Ricci) Curvature \\u03ba(i, j) close to the minimum value of \\u22122 cause oversquashing (Proposition 4.4 and Theorem 4.5 in [3]). From [1] we know that \\u03ba(i, j) \\u2265 BF_c(i, j), and through the distribution of BF_c in Appendix A we can see that most edges are far away from the lower limit of \\u22122 of BF_c (and therefore also \\u03ba(i, j)).\\u201d*\\n\\nIn other words, the BORF method is backed up by theoretical results that identify edges with Ollivier (Ricci) Curvature \\u03ba(i, j) close to the minimum value of \\u22122 as bottlenecks and the distributions in Appendix A reveal that this condition is rarely satisfied. Thus, a first proxy for an analogue of Table 1 could be constructed using these distributions.\\n\\n\\n--- \\n--- \\n\\n\\n[1] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations, 2021\\n\\n[2] Dwivedi, V. 
P., Ramp\\u00e1\\u0161ek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., & Beaini, D. (2022). Long Range Graph Benchmark. Adv. Neural Inf. Process. Syst. Track Datasets Benchmarks.\\n\\n[3] Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, and Tan Minh Nguyen. Revisiting over-smoothing and over-squashing using ollivier-ricci curvature. In International Conference on Machine Learning, pages 25956\\u201325979. PMLR, 2023\"}", "{\"summary\": \"This paper investigates the effectiveness of curvature-based rewiring in mitigating bottlenecks in graph machine learning tasks. It argues that the theoretical conditions for edges being considered as bottlenecks are not necessarily satisfied for edges being modified in practice. It further argues that the superior performance of some existing methods is likely due to hyperparameter selection rather than systematic improvement.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper takes a careful look at some of the curvature-based methods proposed in the literature and examines whether theoretical conditions match empirical practice. From this perspective, the paper represents a move towards the right direction in evaluation of graph machine learning methods.\\n\\n2) Detailed description of experimental setup provides helpful guideline for future research in terms of conducting rigorous empirical evaluation. The argument on hyperparameter selection is interesting and points to the importance of a probabilistic view in performance evaluation.\\n\\n3) The paper is clearly motivated and generally well written. The visualisations are helpful to aid understanding.\", \"weaknesses\": \"1) As discussed in the paper briefly, I don\\u2019t feel the datasets being tested are the most appropriate ones (see Questions below). 
This makes the findings less surprising and not entirely convincing (although in fairness this is probably a limitation of previous methods as well).\\n\\n2) Given that this is a paper on empirical validation, experiments should perhaps be done on more than one rewiring method and one single GNN model.\", \"questions\": \"1) It is unclear whether any dataset in Table 1 would possess bottlenecks that hinder (in particular long-range) interactions that might be necessary for the task (in some sense this is also a limitation of the experiments in Topping et al. 2022), which is one of the main reasons why curvature-based rewiring was proposed in the first place. Therefore the analyses presented in this paper are, albeit interesting and pointing towards the right direction, not entirely surprising. This has been briefly discussed in Section 5, but it might be helpful if the authors can conduct experiments on datasets that may possess long-range interactions, for example the ones described in Dwivedi et al. (https://arxiv.org/abs/2206.08164). Note that the suitability of these datasets is itself under active debate (see https://arxiv.org/abs/2309.00367), nevertheless they might be more appropriate than the datasets chosen in the paper.\\n\\n2) The experiments are mostly based on a single rewiring technique and a single GNN model, i.e., the GCN. While this is a reasonable starting point, for a more comprehensive evaluation and conclusive evidence, more rewiring methods and GNN models (e.g., GraphSage, ChebNet, or GIN) should be tested. I appreciate the former is recognised as a limitation of the current work, but it makes it less clear how generalisable the findings are.\\n\\n3) I don\\u2019t think homophily is the only factor that determines the long-rangeness of the task. It should depend on the graph topology (e.g., diameter), node features, and the nature of the task as well. 
Although this is not necessarily the focus of the paper, discussion about this can be made more precise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their time and relevant comments. We respond to them point-by-point below\\n\\n**1. It is unclear whether any dataset in Table 1 would possess bottlenecks that hinder (in particular long-range) interactions that might be necessary for the task (in some sense this is also a limitation of the experiments in Topping et al. 2022), which is one of the main reasons why curvature-based rewiring was proposed in the first place.**\\n\\nWe agree with the referee that additional robustness checks are valuable to increase the validity of our findings. Computational limitations restrained us from performing the full hyperparameters analysis for the \\u201cPascalVOC-SP\\u201d dataset from LRGB [2], however, we performed the theorem analysis for this dataset. We find that only 30% of selected edges during rewiring satisfy condition 2b, consistent with our previous findings. This highlights that even in these long-range datasets chosen edges to rewire are not necessarily responsible for oversquashing and (discrete) curvature-based rewiring methods will most likely not yield performance gains on this dataset.\\n\\n\\n| Dataset | Edges Rewired | Condition 2 | Condition 2b |\\n|:---------------------------|:-------------------------:|:---------------------:|:----------------------------:|\\n| PascalVOC-SP | 2 259 482 | 0 (0%) | 674 596 (29.86 %)\", \"concerning_the_datasets_used_in_our_work\": \"as the main goal of our work was to re-evaluate the performance of curvature based rewiring we indeed focused on the datasets presented in the original work [1], which were used to assess the validity of the rewiring algorithm. 
We, therefore, believe that these are then also warranted to re-evaluate its effectiveness and constitute the most transparent approach for this. Our experiments relating to edges satisfying the conditions from Theorem 4 [1] in section 3 then show that these datasets do not possess edges that necessarily oversquash information. Given the impact of the original work [1], we are confident that these results are substantial enough to be presented.\\n\\n **2. The experiments are mostly based on a single rewiring technique and a single GNN model, i.e., the GCN. While this is reasonable starting point, for a more comprehensive evaluation and conclusive evidence, more rewiring methods and GNN models (e.g., GraphSage, ChebNet, or GIN) should be tested. I appreciate the former is recognised as a limitation of the current work, but it makes it less clear how generalisable the findings are.**\", \"concerning_the_architecture_choice\": \"We have performed additional analyses to show that our results do not depend on the choice of GNN architecture (GCN) that was originally used in [1]. To keep the scope of our work aligned with our main message, we have performed, within computational constraints, experiments on three datasets (Texas, Cora and Chameleon) with two additional architectures: GAT [4] and GraphSAGE [5]. The selected datasets are representative for the three categories of node classification datasets used throughout the papers.\\n\\nThe results are in line with what we expected from previous runs. For GraphSAGE we see that none of the curvature notions consistently improve performances with respect to no rewiring, both on the distribution level as well as on the top 10%. For GAT, we confirm this observation for the Cora and Chameleon datasets. For Texas, we see that AFc-based rewiring does, on occasions, perform well, especially in the top 10% of runs. This is however linked to a large spread in performance as indicated by the standard deviation of the top 10% runs. 
However, as already mentioned, this performance gain cannot be found for Cora nor Chameleon. We have added these results in Appendix C of our work.\", \"concerning_the_rewiring_methods\": \"Our goal was to re-analyse the effectiveness of discrete curvature-based rewiring methods. One advantage of those measures, over spectral techniques, is that they can be linked directly to the message passing and local information bottlenecks, as demonstrated in Theorem 4 in ref. [1] instead of relying on global measures of the graph that might not reflect the local bottlenecks accurately. We used SDRF as a rewiring method as it allows us to study the effect of rewiring a certain edge, independent of other confounding elements (the identification and rewiring of edges is purely done based on curvature). According to Theorem 2 from [1], we know that the Balanced Forman curvature is a lower bound for the Ollivier (Ricci) Curvature. Therefore, in lines 184-188, our initial submission already included a section on how our results could apply to other curvature-based rewiring techniques such as BORF [3].\"}", "{\"comment\": \"**3. 
The analysis is only on GCN (Kipf & Welling, 2016). It will be more comprehensive to include other widely used MPNNs, e.g., GraphSAGE (Hamilton et al., 2017), GatedGCN (Bresson & Laurent, 2018), GAT (Veli\\u010dkovi\\u0107 et al., 2018). This can help better understand the impact of MPNNs on the performance of graph rewiring techniques.**\\n\\nWe have performed additional analyses to show that our results do not depend on the choice of MPNN architecture (GCN) that was originally used in [1]. To keep the scope of our work aligned with our main message, we have performed, within computational constraints, experiments on three datasets (Texas, Cora and Chameleon) with two additional architectures: GAT [4] and GraphSAGE [5]. The selected datasets are representative of the three categories of node classification datasets used throughout the papers.\\nThe results are in line with what we expected from previous runs. For GraphSAGE, we see that none of the curvature notions consistently improve performances with respect to no rewiring, both on the distribution level as well as on the top 10%. For GAT, we confirm this observation for the Cora and Chameleon datasets. For Texas, we see that AFc-based rewiring does, on occasions, perform well, especially in the top 10% of runs. This is however linked to a large spread in performance as indicated by the standard deviation of the top 10% runs. However, as already mentioned, this performance gain cannot be found for Cora nor Chameleon. We have added these results in Appendix C of our work.\\n\\n\\n--- \\n--- \\n\\n\\n[1] Topping, J., Di Giovanni, F., Chamberlain, B. P., Dong, X. & Bronstein, M. M. Understanding over-squashing and bottlenecks on graphs via curvature in International Conference on Learning Representations (2021). 1\\u20133, 7\\u20139\\n\\n[2] Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, and Tan Minh Nguyen. Revisiting over-smoothing and over-squashing using ollivier-ricci curvature. 
In International Conference on Machine Learning, pages 25956\\u201325979. PMLR, 2023.\\n\\n[3] Dwivedi, V. P., Ramp\\u00e1\\u0161ek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., & Beaini, D. (2022). Long Range Graph Benchmark. Adv. Neural Inf. Process. Syst. Track Datasets Benchmarks.\\n\\n[4] Veli\\u010dkovi\\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\\u00f2, P., & Bengio, Y. (2018). Graph Attention Networks. Proc. Int. Conf. Learn. Represent.\\n\\n[5] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive Representation Learning on Large Graphs. Adv. Neural Inf. Process. Syst., 30.\"}
The study questions the practical benefits of curvature-based rewiring for GNNs and calls for a more nuanced evaluation of GNN improvements.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Clear motivation: The paper has a clear motivation to revisit and critically evaluate the effectiveness of curvature-based rewiring in Graph Neural Networks, and to specifically test whether the theoretical justifications for these methods are genuinely applicable to real-world datasets.\", \"Extensive experiments: The authors conduct a thorough experimental analysis, testing various curvature measures and examining their effects on node and graph classification tasks. They scrutinize both the theoretical underpinnings and the practical outcomes of rewiring, demonstrating that the edges selected by the rewiring process do not always correspond to bottleneck points. Additionally, they show that state-of-the-art performance is often a result of hyperparameter tuning rather than inherent benefits from rewiring.\"], \"weaknesses\": [\"Presentation: pages 8 and 9 could benefit from some reorganization, for example by better integrating table 2 and figure 2 into the text. Table 2 could also benefit from the best-performing setting being highlighted. I also find Figure 3 hard to read: perhaps it would be better to split the figure in two, i.e. have one figure with the curvature distributions and one with the mean test accuracies.\", \"More nuanced discussion: while this may be a minor point, I would suggest that the authors include a sentence or two about the role of curvature in graph machine learning more broadly in their discussion section, for example by referring to [1]. 
While I welcome that the paper pushes back against curvature-based rewiring and consider it good science, this does not mean that curvature is generally not useful for GNNs.\", \"Long-range datasets: again a minor point, but additional graph-level datasets would further strengthen the paper's message. The authors could, for example, look at Peptides-func and Peptides-struct in the LRGB datasets [2].\", \"[1] Fesser, Lukas, and Melanie Weber. \\\"Effective Structural Encodings via Local Curvature Profiles.\\\" The Twelfth International Conference on Learning Representations.\", \"[2] Dwivedi, Vijay Prakash, et al. \\\"Long range graph benchmark.\\\" Advances in Neural Information Processing Systems 35 (2022): 22326-22340.\"], \"questions\": \"Could the authors explain why they focus on the theoretical results related to SDRF and not the ones related to BORF, another curvature-based method? Their theoretical results seem less restrictive than what\\u2019s presented in the SDRF paper, so I'm wondering what an analogue of Table 1 could look like in this context.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the Reviewer's time and effort in engaging with us and providing valuable, constructive feedback and we are grateful to the Reviewer for improving our scores.\\n\\nAll the discussed content will be incorporated into the revised manuscript.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"**3. The baseline of the experiment needs to be increased. In the task of node classification, the importance of node characteristics is very important. Therefore, the authors need to add an MLP as a baseline. If you can't explain the effectiveness of the edge in this task, then you can't fully explain the problem of rewiring.**\\n\\nOur study aims to investigate how rewiring impacts the performance of Graph Neural Networks. 
Since MLPs inherently do not use the graph topology as information for predictions, we do not believe that such a baseline would prove useful in this context. Comparing rewiring within GNNs allows us to \\u201cisolate\\u201d the effect of rewiring, while comparing against MLPs would fall within a larger evaluation of Graph Neural Networks in general, whose representative power has already been the subject of study [5]. \\n \\nAdditionally, we note that standard GNN architectures such as GCNs [6] or GATs [7] are, in their respective works, benchmarked against relevant baselines, providing strong evidence for the use of graph neural networks for these tasks.\\n\\n--- \\n--- \\n\\n\\n[1] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations, 2021.\\n\\n[2] Lukas Fesser and Melanie Weber. Mitigating over-smoothing and over-squashing using augmentations of forman-ricci curvature. In The Second Learning on Graphs Conference, 2023\\n\\n[3] Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, and Tan Minh Nguyen. Revisiting over-smoothing and over-squashing using ollivier-ricci curvature. In International Conference on Machine Learning, pages 25956\\u201325979. PMLR, 2023.\\n\\n[4] Federico Barbero, Ameya Velingker, Amin Saberi, Michael M Bronstein, and Francesco Di Giovanni.Locality-aware graph rewiring in gnns. In The Twelfth International Conference on Learning Representations, 2023.\\n\\n[5] Wang, Xiyuan, and Muhan Zhang. \\\"How powerful are spectral graph neural networks.\\\" In International conference on machine learning, pp. 23341-23362. PMLR, 2022.\\n\\n[6] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. 
International Conference on Learning Representations (ICLR), 2017.\\n\\n[7] Veli\\u010dkovi\\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\\u00f2, P., & Bengio, Y. (2018). Graph Attention Networks. Proc. Int. Conf. Learn. Represent\"}", "{\"metareview\": \"Oversquashing has been one of the central challenges in Graph Neural Networks (GNNs). This phenomenon has been demonstrated in synthetic datasets. In the paper, the authors reassess the potential of curvature-based rewiring to enhance the performance of GNNs on real-world datasets. The study investigates the practical application of these techniques, particularly their effectiveness in mitigating the challenges of oversquashing phenomenon in complex and real-world scenarios.\\n\\nMost of the reviewers agree that the theoretical analyses in the paper are thorough and clear, including a detailed explanation of the bottleneck conditions of curvature rewiring. The experiments are extensive, detailed and convincing, which support the theoretical findings. The presentation of the paper is also clear and easy to follow.\\n\\nWhile there was a concern that the paper presents new insights into curvature-based rewiring, rather than proposing a new solution, model, or method, most of the reviewers and I believe that these new insights are useful for the community. As a consequence, I recommend accepting the paper.\\n\\nThe authors are encouraged to incorporate the feedbacks and comments of the reviewers into the revision of their paper.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the metareview.\"}", "{\"comment\": \"Thank the authors for their detailed response. While most of my concerns have been answered, I still hold the perspective that this paper presents a problem with new insights, not a new solution, model, or method. 
As the primary field is selected as \\\"learning on graphs and other geometries & topologies\\\", I expect new learning methodologies rather than re-evaluation (more suitable for the \\\"datasets and benchmarks\\\" primary track). My scores keep the same.\"}", "{\"summary\": \"This paper reconsiders the effect of rewiring according to curvature on GNN effects. Through a large number of experiments, it points out that the rewiring does not meet the identification criteria of the message-passing bottleneck in the figure, and the effect is not significantly improved under a large number of hyperparameter attempts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This article gives a very detailed introduction to curvature rewiring, which is very helpful for readers who are new to the field to understand the work.\\n2. This paper explains the bottleneck conditions of curvature rewiring from the theoretical point of view and verifies the effect of various methods through experiments, which is very convincing.\\n3. The paper proves its point through a large number of experiments, which show that the existing methods are ineffective in solving the problem.\", \"weaknesses\": \"1. The author merely presents a problem, not a solution to it. The work lacks sufficient integrity.\\n2. There are some problems with the selection of datasets. For example, MUTAG and PROTEIN, which are themselves molecules and proteins, have biochemical implications. Therefore, performance may not be significantly improved after rewiring. For different areas of graph data, we need to be more profound.\\n3. The baseline of the experiment needs to be increased. In the task of node classification, the importance of node characteristics is very important. Therefore, the authors need to add an MLP as a baseline. 
If you can't explain the effectiveness of the edge in this task, then you can't fully explain the problem of rewiring.\\n\\nOverall, I found this paper a very theoretically solid work. I would like to reconsider my score if my concerns could be addressed.\", \"questions\": \"1. Can the author provide a comparison of experimental results without using edges? (an MLP with the same number of layers) In general, in a node classification task, node characteristics play a crucial role in the classification effect, often sometimes the role of edges is not obvious.\\n2. For Other questions please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response to my questions and concerns.\\n\\nWhile I do support acceptance of the paper, I am hesitant to increase my score to an 8 at this point and prefer to leave it at a 6.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer,\\n \\nWe understand that the rebuttal period is a particularly busy time for reviewers, but we would kindly like to ask the reviewer again if our response has thoroughly adressed all their questions. 
We hope that, if this is the case, the reviewer will consider updating their score for our submission.\\n \\nConsidering the deadlines of the conference, we have already modified our submission with the additional experiments (see appendix C), and have adapted the discussion section to better reflect the general use of curvature in Graph Neural Networks as requested by the reviewer.\\n \\nThank you again for your time and questions.\\n \\nBest regards,\\n\\nThe authors\"}
As the over-squashing issue is highly related to the long-range dependency, the work doesn't include the long-range graph benchmark (Dwivedi et al., 2022), which a bit weakens the study and analysis.\\n3. The analysis is only on GCN (Kipf & Welling, 2016). It will be more comprehensive to include other widely used MPNNs, e.g., GraphSAGE (Hamilton et al., 2017), GatedGCN (Bresson & Laurent, 2018), GAT (Veli\\u010dkovi\\u0107 et al., 2018). This can help better understand the impact of MPNNs on the performance of graph rewiring techniques. \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n------- \\n- Dwivedi, V. P., Ramp\\u00e1\\u0161ek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., & Beaini, D. (2022). Long Range Graph Benchmark. _Adv. Neural Inf. Process. Syst. Track Datasets Benchmarks_.\\n- Bresson, X., & Laurent, T. (2018). Residual Gated Graph ConvNets. In _arXiv:1711.07553_.\\n- Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive Representation Learning on Large Graphs. _Adv. Neural Inf. Process. Syst._, _30_. \\n- Veli\\u010dkovi\\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\\u00f2, P., & Bengio, Y. (2018). Graph Attention Networks. _Proc. Int. Conf. Learn. Represent._\", \"questions\": \"No further questions beyond weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their feedback and insightful comments. We have responded to them point by point below.\\n\\n**1. Presentation: pages 8 and 9 could benefit from some reorganization, for example by better integrating table 2 and figure 2 into the text. Table 2 could also benefit from the best-performing setting being highlighted. I also find Figure 3 hard to read: perhaps it would be better to split the figure in two, i.e. 
have one figure with the curvature distributions and one with the mean test accuracies.\", \"concerning_table_2\": \"we will adapt this in the final version of the manuscript together with a reorganisation of the results section of our manuscript.**\", \"regarding_figure_2\": \"Both the distributions and the boxenplots display the same information in a complementary manner. The boxenplots are helpful in recognizing performance outliers while the distributions provide a more global view of performances over the sweep of hyperparameters. The distributions of the curvature values are included in the Appendix A of our manuscript. They provide a supporting argument for the fact that most edges selected during the rewiring process do not satisfy the conditions from Theorem 4 (as the distributions of curvature values show that not enough edges are negative enough).\\n\\n**2. More nuanced discussion: while this may be a minor point, I would suggest that the authors include a sentence or two about the role of curvature in graph machine learning more broadly in their discussion section, for example by referring to [1]. While I welcome that the paper pushes back against curvature-based rewiring and consider it good science, this does not mean that curvature is generally not useful for GNNs.**\\n\\nWe agree with the reviewer that curvature in general can be useful for Graph Neural Networks. In the final version, we will revise the discussion sections to include this more nuanced point of view.\\n\\n**3. Long-range datasets: again a minor point, but additional graph-level datasets would further strengthen the paper's message. The authors could, for example, look at Peptides-func and Peptides-struct in the LRGB datasets [2].**\\n\\nWe agree with the referee that additional robustness checks are valuable to increase the validity of our findings. 
Computational limitations restrained us from performing the full hyperparameters analysis for the \\u201cPascalVOC-SP\\u201d dataset from LRGB [2], however, we performed the theorem analysis for this dataset. We find that only 30% of selected edges during rewiring satisfy condition 2b, consistent with our previous findings. This highlights that even in these long-range datasets chosen edges to rewire are not necessarily responsible for oversquashing and (discrete) curvature-based rewiring methods will most likely not yield performance gains on this dataset.\\n| Dataset | Edges Rewired | Condition 2 | Condition 2b |\\n|:---------------------------|:-------------------------:|:---------------------:|:----------------------------:|\\n| PascalVOC-SP | 2 259 482 | 0 (0%) | 674 596 (29.86 %)\", \"concerning_the_datasets_used_in_our_work\": \"as the main goal of our work was to re-evaluate the performance of curvature based rewiring we indeed focused on the datasets presented in the original work [1], which were used to assess the validity of the rewiring algorithm. We, therefore, believe that these are then also warranted to re-evaluate its effectiveness and constitute the most transparent approach for this. Our experiments relating to edges satisfying the conditions from Theorem 4 [1] in section 3 then show that these datasets do not possess edges that necessarily oversquash information. Given the impact of the original work [1], we are confident that these results are substantial enough to be presented.\"}", "{\"comment\": \"**3. I don\\u2019t think homophily is the only factor that determines the long-rangeness of the task. It should depend on the graph topology (e.g., diameter), node features, and the nature of the task as well. 
Although this is not necessarily the focus of the paper, discussion about this can be made more precise.**\\n\\nWe agree with the reviewer that the discussion on the long-rangeness of a task should be more elaborate and not only refer to homophily, which is indeed not the only determining factor. In the final version, we will revise the discussion sections to include this more nuanced point of view.\\n\\n\\n--- \\n--- \\n\\n\\n[1] Topping, J., Di Giovanni, F., Chamberlain, B. P., Dong, X. & Bronstein, M. M. Understanding over-squashing and bottlenecks on graphs via curvature in International Conference on Learning Representations (2021). 1\\u20133, 7\\u20139\\n\\n[2] Dwivedi, V. P., Ramp\\u00e1\\u0161ek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., & Beaini, D. (2022). Long Range Graph Benchmark. Adv. Neural Inf. Process. Syst. Track Datasets Benchmarks.\\n\\n[3] Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, and Tan Minh Nguyen. Revisiting over-smoothing and over-squashing using ollivier-ricci curvature. In International Conference on Machine Learning, pages 25956\\u201325979. PMLR, 202\\n\\n[4] Veli\\u010dkovi\\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\\u00f2, P., & Bengio, Y. (2018). Graph Attention Networks. Proc. Int. Conf. Learn. Represent\\n\\n[5] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive Representation Learning on Large Graphs. Adv. Neural Inf. Process. Syst., 30.\"}", "{\"comment\": \"We would like to thank the reviewer for their time and thoughtful comments. We respond to them point-by-point below\\n\\n**1. The author merely presents a problem, not a solution to it. The work lacks sufficient integrity.**\\n\\nDue to the nature of our paper, we agree with the reviewer that our work does not explicitly demonstrate any new methodologies when it comes to rewiring. 
The focus in our paper is on the performance of curvature-based rewiring on datasets that are used as benchmarks in other rewiring papers instead of synthetic datasets, where bottlenecks can be artificially induced. In doing so we aim to demonstrate that rewiring often fails to meet theoretical conditions, causing the observed improvements to not be theoretically justified. This questions the effectiveness of curvature-based rewiring on improving GNN performances. To explain the results obtained in previous works, we find that the reported improvements may originate from outliers in the hyperparameter sweeps, which can occasionally show high accuracies, instead of consistent performance gains due to rewiring. When comparing the distributions of the results occurring in these sweeps, we show that these outliers occur also when no rewiring has taken place. This observation, together with the previous theoretical analysis, raises questions about the effectiveness of rewiring, aligning with our finding that there is a mismatch between theoretical expectations and experimental results when rewiring edges.\\n\\nThe message of our work is twofold. First, it serves as a re-evaluation of curvature-based rewiring methods, and can importantly influence further development in GNN as we argue that theoretical results and experiments should be more closely checked. Secondly, we advocate for a new and different approach when evaluating methods, where attention is paid to the influence of sweeping hyperparameters. \\n\\nAdditionally, we would greatly appreciate it if the reviewer could kindly elaborate on what they mean by the lack of integrity, so that we can address their concerns more effectively.\\n\\n**2.There are some problems with the selection of datasets. For example, MUTAG and PROTEIN, which are themselves molecules and proteins, have biochemical implications. Therefore, performance may not be significantly improved after rewiring. 
For different areas of graph data, we need to be more profound.**\\n\\nThe node-classification datasets and graph-classification tasks (MUTAG and PROTEINS) are all datasets which have been used in prior work [1, 2, 3, 4]. Therefore, we believe that is warranted to re-evaluate the effectiveness of curvature based rewiring methods on these datasets. \\n\\nHowever, we agree that certain tasks might not depend on long-range interactions. In lines 495-509 of our initial submission, we elaborate on citation networks (predicting the field of study of a paper might only need the direct neighbors) and also on noise introduced by added edges, allowing distant nodes to communicate even though they should not. In the final manuscript, we would be happy to add a discussion on MUTAG and PROTEINS and their biochemical implications as well.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe hope our response has thoroughly addressed your questions.\\n\\nWe would greatly appreciate any further feedback or additional questions you may have.\\n\\nPlease let us know.\\n\\nThank you once again for your thoughtful insights and consideration.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"comment\": \"We would like to thank the AC for securing reviews with high quality and would like to thank the reviewers for the detailed questions and very thoughtful comments, which helped us highlight better the key contribution of our work.\\n\\n**Our responses to the Reviewers\\u2019 main questions are summarized below.**\\n\\n* We extended our work to two other graph neural network architectures: GAT [1] and GraphSAGE [2] on three datasets we originally used (Texas, Cora and Chameleon). The datasets are a representative sample of the datasets used throughout our work. 
These experiments confirmed that performance increases after rewiring are due to hyperparameter tuning rather than the rewiring itself.\\n\\n\\n* We ran our analysis from section 3 as well on the \\u201cPascalVOC-SP\\u201d dataset from the LRGB [3] datasets, which was brought up by different reviewers. We find here as well that only 30% of selected edges during rewiring satisfy condition 2b of Theorem 4 [4], highlighting that even in these long-range datasets the edges chosen for rewiring are not responsible for oversquashing. \\n\\n\\n* We highlighted again how our theoretical results relate to other curvature-based rewiring papers such as BORF [5].\\n\\n**We welcome any follow-up questions from the reviewers regarding our rebuttal. We hope that, based on our detailed responses, the reviewers will consider increasing their scores if their concerns have been sufficiently addressed.**\\n\\n---\\n--- \\n\\n[1] Veli\\u010dkovi\\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\\u00f2, P., & Bengio, Y. (2018). Graph Attention Networks. Proc. Int. Conf. Learn. Represent\\n\\n[2] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive Representation Learning on Large Graphs. Adv. Neural Inf. Process. Syst., 30. \\n\\n[3] Dwivedi, V. P., Ramp\\u00e1\\u0161ek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., & Beaini, D. (2022). Long Range Graph Benchmark. Adv. Neural Inf. Process. Syst. Track Datasets Benchmarks.\\n\\n[4] Topping, J., Di Giovanni, F., Chamberlain, B. P., Dong, X. & Bronstein, M. M. Understanding over-squashing and bottlenecks on graphs via curvature in International Conference on Learning Representations (2021). 1\\u20133, 7\\u20139\\n\\n[5] Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, and Tan Minh Nguyen. Revisiting over-smoothing and over-squashing using ollivier-ricci curvature. In International Conference on Machine Learning, pages 25956\\u201325979. 
PMLR, 2023.\"}", "{\"comment\": \"We want to thank the reviewer for their feedback and their questions. We respond point by point here below:\\n\\n**1. The analysis and empirical study are specific to the curvature-based method (Topping et al., 2021).**\\n\\nThe main focus of our work was to re-analyse the effectiveness of discrete curvature-based rewiring methods. One advantage of those measures, in contrast with other methods such as spectral-based rewiring, is that they can be linked directly to the message passing and local information bottlenecks, as demonstrated in Theorem 4, instead of relying on global measures of the graph that might not reflect the local bottlenecks accurately. Due to Theorem 2 from [1], we know that the Balanced Forman curvature is a lower bound for the Ollivier (Ricci) Curvature. Therefore, in lines 184-188, our initial submission already included a section on how our results could apply to other curvature-based rewiring techniques such as BORF [2].\\n\\n**2. As the over-squashing issue is highly related to the long-range dependency, the work doesn't include the long-range graph benchmark (Dwivedi et al., 2022), which a bit weakens the study and analysis.**\\n\\nWe agree with the referee that additional robustness checks are valuable to increase the validity of our findings. Computational limitations restrained us from performing the full hyperparameter analysis for the \\u201cPascalVOC-SP\\u201d dataset from LRGB [3], however, we performed the theorem analysis for this dataset. We find that only 30% of selected edges during rewiring satisfy condition 2b, consistent with our previous findings. 
This highlights that even in these long-range datasets chosen edges to rewire are not necessarily responsible for oversquashing and (discrete) curvature-based rewiring methods will most likely not yield performance gains on this dataset.\\n\\n| Dataset | Edges Rewired | Condition 2 | Condition 2b |\\n|:---------------------------|:-------------------------:|:---------------------:|:----------------------------:|\\n| PascalVOC-SP | 2 259 482 | 0 (0%) | 674 596 (29.86 %)\", \"concerning_the_datasets_used_in_our_work\": \"as the main goal of our work was to re-evaluate the performance of curvature based rewiring we indeed focused on the datasets presented in the original work [1], which were used to assess the validity of the rewiring algorithm. We, therefore, believe that these are then also warranted to re-evaluate its effectiveness and constitute the most transparent approach for this. Our experiments relating to edges satisfying the conditions from Theorem 4 [1] in section 3 then show that these datasets do not possess edges that necessarily oversquash information. Given the impact of the original work [1], we are confident that these results are substantial enough to be presented.\"}", "{\"comment\": \"Thank you for the careful rebuttal.\\n\\nI believe the authors have addressed most of my concerns. The challenge to the foundation of curvature-based rewiring techniques is meaningful to the community.\\n\\nI will raise the score to 6 accordingly.\"}" ] }
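The Balanced Forman curvature that the rebuttals above keep referring to (Topping et al., 2021, together with its Theorem 2 lower bound on Ollivier-Ricci curvature) can be sketched concretely. The snippet below is an illustrative reimplementation, not code from any of the papers or reviews in this record: it restricts to graphs without 4-cycles so that the 4-cycle and gamma_max terms of the full definition vanish, and the adjacency dictionaries are made-up toy graphs.

```python
# Balanced Forman curvature (Topping et al., 2021, Definition 1), restricted
# to graphs with no 4-cycles so the 4-cycle / gamma_max terms vanish.
# Illustrative sketch only -- NOT the implementation used in the rebuttals.
def bf_curvature_no_c4(adj, i, j):
    """Curvature of edge (i, j); adj maps node -> set of neighbours."""
    di, dj = len(adj[i]), len(adj[j])
    if min(di, dj) == 1:              # degree-1 endpoint: curvature is 0
        return 0.0
    triangles = len(adj[i] & adj[j])  # common neighbours = triangles on (i, j)
    return (2.0 / di + 2.0 / dj - 2.0
            + 2.0 * triangles / max(di, dj)
            + triangles / min(di, dj))

# Path 0-1-2-3: the interior edge is flat (curvature 0).
adj_path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(bf_curvature_no_c4(adj_path, 1, 2))   # -> 0.0

# Binary tree: edges between internal nodes are negatively curved,
# which is exactly the "bottleneck" signature SDRF rewires around.
adj_tree = {0: {1, 2}, 1: {0, 3, 4}, 2: {0, 5, 6},
            3: {1}, 4: {1}, 5: {2}, 6: {2}}
print(bf_curvature_no_c4(adj_tree, 0, 1))   # -> 2/2 + 2/3 - 2 = -0.333...

# Triangle: shared neighbours push the curvature positive.
adj_tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(bf_curvature_no_c4(adj_tri, 0, 1))    # -> 1.5
```

Negative values on tree-like edges match the intuition invoked in the discussion: such edges mark local information bottlenecks. Note that the conditions of Theorem 4 additionally require the curvature to be sufficiently negative, which is precisely what the authors check empirically on the rewired edges.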
Ecb6HBoo1r
Deciphering Cell Lineage Gene Regulatory Network via MTGRN
[ "Rui Peng", "Wei Wu", "Jinzhuo Wang" ]
Gene regulatory network (GRN) inference is crucial for cell fate decision, as it outlines the regulations between genes, which direct cell differentiation. Although there has been some work to infer cell lineage GRN, it fails to capture the continuous nature of the differentiation process, as cells are grouped by cell type or cluster and the GRN is inferred in a discrete manner. In this paper, we hypothesize GRN can forecast future gene expression based on history information and transform the inference process into a multivariate time series forecasting problem, linking cells at different times to learn temporal dynamics and inferring GRN in a continuous process. We introduce MTGRN, a transformer-based model that only takes single cell data as input to infer the cell lineage GRN by forecasting gene expression. MTGRN consists of temporal blocks and spatial blocks, effectively captures the connections between cells along their developmental trajectories and leverages prior knowledge to elucidate regulatory interactions among genes. It significantly outperforms six other methods across five datasets, demonstrating superior performance even compared to multimodal approaches. Based on the inferred GRN, MTGRN pinpoints three crucial genes associated with the development of mouse embryonic stem cells and depicts the activity changes of these genes during cellular differentiation. Beyond this, MTGRN is capable of conducting perturbation experiments on key genes and accurately modeling the change of cell identity following the knockout of Gata1 in mouse hematopoietic stem cells.
[ "Gene regulatory network", "Time series", "In silico perturbation" ]
Reject
https://openreview.net/pdf?id=Ecb6HBoo1r
https://openreview.net/forum?id=Ecb6HBoo1r
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yv6Szz9cKi", "yiyMI4eheo", "vAqwz7FtiW", "uA2PHqyxWr", "u45RAcuBzR", "sXFCZXRM8E", "pDRNTheNuU", "lMHJ5l6ipF", "kuaxd4ehqK", "kBaDK1YhwB", "ijOFwVU5fj", "gxZ7B3iZfD", "gORLie93L6", "d8aGU0kfAG", "d01vN8zJTy", "aHcw4R128a", "aA2jzZVOpS", "ZlJPceKkC5", "Qpf73qKe2A", "OXTm1UFdCl", "LtBq4NWUKc", "LhPwu4ZNVC", "HneRts1tfy", "GYWciwXQuX", "ET7p0j7VZW", "EMJVN3l6tM", "E0I6Dqbo1k", "8WGbXbom1h", "5jckgxpwLQ", "2WKhw3Y1e6" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_review" ], "note_created": [ 1731549579262, 1732512271322, 1732772538287, 1732549933463, 1731565588543, 1732553458633, 1731743260991, 1731589508557, 1732637112859, 1731490235299, 1731572453599, 1732208031887, 1731589198511, 1732413125293, 1732779028747, 1731746743019, 1732533351018, 1730632779724, 1732549500475, 1732515655164, 1730220042551, 1732512721715, 1730690094417, 1732503709129, 1731953998700, 1731695229707, 1731645655542, 1737523704021, 1734536476789, 1729799638254 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5401/Reviewer_rGeJ" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_rGeJ" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_G2id" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_G2id" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_rGeJ" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_rGeJ" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_e1zu" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_e1zu" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_wyvQ" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_wyvQ" ], [ "ICLR.cc/2025/Conference/Submission5401/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5401/Area_Chair_bXx8" ], [ "ICLR.cc/2025/Conference/Submission5401/Reviewer_wyvQ" ] ], "structured_content_str": [ "{\"comment\": \"We would like to clarify the perturbation experiments mentioned by the reviewer in the weakness section. Actually, they serve as an important technical validation of the inferred gene regulatory network. Similar approaches can be found in studies like Celloracle [1]. In fact, without perturbation analysis, it would be difficult to validate the inferred network\\u2019s accuracy. It is through the perturbation of key factors that the robustness of the network can be definitively proven.\\n\\nIn Figure 7 (a), our TF perturbation analysis is entirely based on the GRN inferred by our model (MTGRN). 
If the inferred network were incorrect, the calculated offset vectors for each cell would deviate in the wrong direction. However, by performing perturbations on the network constructed by MTGRN, we observed that knocking out Gata1 (a well-known transcription factor promoting erythroid development) causes the offset vectors of developing cells to shift in the opposite direction of the developmental trajectory. This strongly supports the accuracy of our model\\u2019s GRN inference and serves as a robust technical validation. It is an important technical discussion for analyzing model performance, and we do not fully agree with the reviewer\\u2019s perspective on this point. We hope to have further discussion.\\n\\nWe greatly appreciate the opportunity for this rebuttal and hope our responses address the reviewer\\u2019s concerns. Thanks once again!\\n\\n[1] Kamimoto K, Stringa B, Hoffmann C M, et al. Dissecting cell identity via network inference and in silico gene perturbation[J]. Nature, 2023, 614(7949): 742-751.\"}", "{\"title\": \"Anticipation of your response.\", \"comment\": \"We greatly value the feedback provided by reviewers and sincerely apologize for any lack of clarity in presenting our work during the initial submission, which may have caused confusion. In the rebuttal phase, we have addressed the issues you raised in detail and the additional experiments you mentioned have been included in the supplementary material for your reference. We hope these clarifications and supplementary analyses will help you evaluate the significance of our study comprehensively.\\n\\nWe are pleased to note that during the rebuttal phase, we successfully addressed the concerns of other reviewers, which led to a positive reevaluation and corresponding score improvements. We genuinely hope our detailed and thoughtful response will also resolve your concerns and help improve the score of our paper. 
We sincerely look forward to your feedback and thank you again!\"}", "{\"title\": \"Anticipation of your response\", \"comment\": \"We are looking forward to receiving feedback from reviewer G2id.\"}", "{\"comment\": \"We are very pleased to have addressed your concerns and hope that the additional experiments we conducted will allow the reviewer to evaluate our work more comprehensively. We hope this discussion will help improve the score of our paper, and thank you again!\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s high-quality comments. Below, we will address each of the questions raised.\\n1. In response to the reviewer's question that NicheNet and the ground truth network have a significant overlap: it is a very important point, which we discuss below. First, we proposed a method named **Random** in Table 1 for baseline comparison. In **Random**, we randomly selected the top-k edges (where k is equal to the number of edges in the ground truth) from the prior knowledge as the inferred GRN. Looking at the results, we find that the AUROC for **Random** across the five datasets is around 0.5, indicating there is no significant overlap between the prior knowledge and the ground truth. Second, we analyzed the overlap between the edges provided by NicheNet and the edges in the ground truth. The result is presented below:\\n - Prior Knowledge: 5318181 edges\\n - mESC ground truth network: 24557 edges (overlap: 16695)\\n - mHSC-E ground truth network: 24726 edges (overlap: 14732)\\n - mHSC-GM ground truth network: 16198 edges (overlap: 9818)\\n - mHSC-L ground truth network: 4705 edges (overlap: 2494)\\n\\n We can see that NicheNet includes 5318181 edges, while the ground truth edge counts in our datasets are 24557, 24726, 16198, and 4705, with intersections of 16695, 14732, 9818, and 2494 edges, respectively. In each dataset, the effective regulatory edges provided by NicheNet account for only 0.31%, 0.27%, 0.18%, and 0.04% of the entire prior knowledge. 
This indicates that NicheNet provides a coarse prior network containing considerable noise, including potential false-positive regulatory relationships from databases or experiments. This is also why the AUROC for **Random** is around 0.5 in Table 1. From the above, there is no significant overlap between NicheNet and the ground truth network, which is why CEFCON [1] also uses it as prior knowledge. MTGRN should identify approximately 10,000 true regulatory edges from a noisy network of over 5 million edges, making this task challenging. For the second question about GENIE3 and GRNBoost2, we chose to compare our model with them for two key reasons. First, these methods are considered classic in GRN inference, so comparing against them allowed us to benchmark our model's performance against established standards. Second, this comparison emphasizes that incorporating prior knowledge can enhance GRN inference. Except for GENIE3 and GRNBoost2, the other three baseline methods (NetREX, CEFCON, and Celloracle) all rely on prior networks; in comparison with these three methods, our model consistently demonstrated significant performance gains, showing the effectiveness of our method. **If the reviewer has any recommendations for models suitable for comparison, we would be glad to compare our model with them as well.**\\n2. In selecting the top k edges, we did not incorporate any gene enrichment information. Using the mESC dataset as an example, MTGRN needs to select the top 24557 edges out of a total of 5318181 edges available in the NicheNet network; the ground truth edges are a tiny fraction of the entire search space, which includes numerous false-positive regulatory interactions that could interfere with network inference. Due to this noise, some unrelated genes may appear more enriched in the prior network than actual regulatory factors. 
Because MTGRN will predict the gene expression for future M time points based on the former N time points, if an unrelated gene is highly enriched in the prior network, it will not effectively fit the expression in the M future cells, so during attention map computation, our model will assign lower scores to such genes, meaning that even if a gene appears highly enriched, it will still receive a low score if it does not contribute to accurate predictions (because they are not the true regulatory factors for the target genes). We can refer to the method **Random** in Table 1 for a clearer explanation. Since **Random** randomly selects edges from the prior network, it is more likely to pick enriched genes. However, it achieved an AUROC close to 0.5 across all five datasets. If our model\\u2019s edge selection were also based on gene enrichment, our scores would be similar to those of **Random**. Instead, MTGRN\\u2019s accuracy across all five datasets is significantly higher than **Random**, demonstrating that our selection of the top-k edges is not based on gene enrichment.\\n\\n[1] Wang P, Wen X, Li H, et al. Deciphering driver regulators of cell fate decisions from single-cell transcriptomics data with CEFCON[J]. Nature Communications, 2023, 14(1): 8459.\\n\\n**Due to the character limit, we will continue addressing the remaining questions in the next comment.**\"}", "{\"title\": \"Anticipation of your response\", \"comment\": \"We greatly value the feedback provided by reviewers and sincerely apologize for any lack of clarity in presenting our work during the initial submission, which may have caused confusion. In the rebuttal phase, we have addressed the issues you raised in detail, and the additional experiments you mentioned have been included in the supplementary material for your reference. 
We hope these clarifications and supplementary analyses will help you evaluate the significance of our study comprehensively.\\n\\nWe are pleased to note that during the rebuttal phase, we successfully addressed the concerns of other reviewers, which led to a positive reevaluation and corresponding score improvements. We genuinely hope our detailed and thoughtful response will also resolve your concerns and help improve the score of our paper. We sincerely look forward to your feedback and thank you again!\"}", "{\"comment\": \"We sincerely thank the reviewer for their response. We will answer the questions below.\\n1. Regarding the first question, *\\\"Did you also restrict the edges selected by GENIE3, GRNBoost2, etc., to those from NicheNet?\\\"*, in the submitted version, we did not intersect the edges predicted by GENIE3 and GRNBoost2 with those in NicheNet. We agree with the reviewer's suggestion that performing such an intersection would provide a more appropriate comparison. Consequently, we conducted this experiment, and the AUROC scores of these two methods after intersecting with NicheNet on the five datasets are as follows (the numbers in parentheses indicate the original scores):\\n\\n | | hHep | mESC | mHSC-E | mHSC-GM | mHSC-L |\\n |:-----------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\\n | GRNBoost2 | 0.589 (0.578) | 0.562 (0.548) | 0.413 (0.385) | 0.473 (0.450) | 0.534 (0.515) |\\n | GENIE3 | 0.503 (0.481) | 0.552 (0.531) | 0.378 (0.350) | 0.442 (0.419) | 0.498 (0.486) |\\n | MTGRN(ours) | 0.664 | 0.713 | 0.485 | 0.765 | 0.583 |\\n\\n It can be observed that the AUROC scores of GENIE3 and GRNBoost2 improved after intersecting with NicheNet, but the improvement is not significant. This is because the prior knowledge provided by NicheNet contains many false-positive edges. As a result, intersecting with NicheNet still retains these incorrectly predicted edges. 
This further validates that the prior network we used is a very coarse network.\\n\\n2. Regarding the second question, *\\\"What are the results of your method if you remove NicheNet entirely from the pipeline?\\\"* The AUROC scores of our method on the five datasets after removing the prior knowledge are as follows (the numbers in parentheses indicate the original scores):\\n\\n | | hHep | mESC | mHSC-E | mHSC-GM | mHSC-L |\\n |:-----:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\\n | MTGRN | 0.631 (0.664) | 0.692 (0.713) | 0.431 (0.485) | 0.738 (0.765) | 0.557 (0.583) |\\n\\n We found that without the prior knowledge network, the performance of our model decreased, but the decline was not significant. Moreover, our model still achieved the best results on the hHep, mESC, and mHSC-GM datasets, even surpassing methods like NetREX, Celloracle, and CEFCON that utilize prior knowledge.\\n\\n3. Regarding the third question, *\\\"don't the overlap values you present mean that there are edges in the ground truth graphs that are not part of NicheNet, therefore never picked by your method?\\\"* This is indeed correct. Since the prior knowledge network does not contain all the edges in the ground truth, the GRN inferred by the model will not include those missing edges. This is a limitation inherent to methods that rely on prior knowledge networks, including Celloracle and CEFCON. However, we want to emphasize that this phenomenon only affects the model's performance since there will always be some ground truth edges that the model cannot predict, which in turn lowers its overall performance. Taken together, the use of the NicheNet network does not significantly enhance the model's performance.\\n\\n4. 
For the fourth question, *\\\"It is also unclear what the overlap is between the ground truth and the prior information in the other baselines (NetREX, CEFCON, and Celloracle) that use such prior information,\\\"* we want to emphasize that the prior networks used for comparison with NetREX and CEFCON are both derived from NicheNet. However, for Celloracle, as it constructs its base GRN using ATAC-seq data for various species, we calculated Celloracle's scores using its own prior network. Next, we will analyze and report the overlap between Celloracle's prior network and the ground truth, with the results provided below:\\n\\n | | mESC | mHSC-E | mHSC-GM | mHSC-L |\\n |:-:|:---------------:|:---------------:|:--------------:|:-------------:|\\n | | 5490099 (31868) | 5490099 (13966) | 5490099 (9498) | 5490099(3178) |\\n\\n It can be observed that the number of edges in Celloracle's prior network and the overlap with the ground truth network are similar to those of NicheNet, demonstrating the rationale of using NicheNet as the prior.\\n\\n**Due to the character limit, we will continue addressing the remaining questions in the next comment.**\"}", "{\"comment\": \"the reference links are listed below:\\n\\n[1] Pratapa A, Jalihal A P, Law J N, et al. Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data[J]. Nature methods, 2020, 17(2): 147-154.\\n\\n[2] Nestorowa S, Hamey F K, Pijuan Sala B, et al. A single-cell resolution map of mouse hematopoietic stem and progenitor cell differentiation[J]. Blood, The Journal of the American Society of Hematology, 2016, 128(8): e20-e31.\\n\\n[3] Hayashi T, Ozaki H, Sasagawa Y, et al. Single-cell full-length total RNA sequencing uncovers dynamics of recursive splicing and enhancer RNAs[J]. Nature communications, 2018, 9(1): 619.\\n\\n[4] Camp J G, Sekine K, Gerber T, et al. Multilineage communication regulates human liver bud development from pluripotency[J]. 
Nature, 2017, 546(7659): 533-538.\\n\\n[5] Wang P, Wen X, Li H, et al. Deciphering driver regulators of cell fate decisions from single-cell transcriptomics data with CEFCON[J]. Nature Communications, 2023, 14(1): 8459.\\n\\n[6] Kamimoto K, Stringa B, Hoffmann C M, et al. Dissecting cell identity via network inference and in silico gene perturbation[J]. Nature, 2023, 614(7949): 742-751.\\n\\n[7] Wang L, Trasanidis N, Wu T, et al. Dictys: dynamic gene regulatory network dissects developmental continuum with single-cell multiomics[J]. Nature Methods, 2023, 20(9): 1368-1378.\\n\\nWe are very grateful for your valuable suggestions and hope our responses clarify any concerns. We greatly value this opportunity to discuss our work. If there are any further questions, please feel free to ask, and we will be happy to provide further details. We hope this discussion will help improve the score of our paper, and thank you again!\"}", "{\"comment\": \"I appreciate the sincere effort the authors put in to address comments from all reviewers. The network predictions look robust compared to the first version. However I am keeping my score same (6). The current contribution of the paper does not demand a higher score in my opinion.\"}", "{\"comment\": \"We sincerely appreciate the reviewer\\u2019s thoughtful comments. we will address each of the questions below.\\n\\n1. Regarding the question of whether prior knowledge plays a significant role in our model's performance, actually we proposed a method named **Random** in Table 1 for baseline comparison. In **Random**, we randomly selected the top-k edges (where k is equal to the number of edges in the ground truth) from the prior knowledge as the inferred GRN. As for results, we can find that the AUROC for **Random** across five datasets is around 0.5, significantly lower than that of our model (MTGRN), indicating that prior knowledge alone is not the reason for our model\\u2019s high performance.\\n2. 
Methods we compared against in Table 1 also rely on prior knowledge to infer GRNs. For example, CellOracle [1] uses ATAC-seq data to construct a base GRN, providing prior knowledge of TF-target relationships. CEFCON [2] also relies on NicheNet as prior information, applying a graph neural network to infer the GRN. For fairness, the baselines we chose also incorporate prior knowledge, and our model achieved higher performance in these comparisons, further supporting that our model\\u2019s performance is not due to the use of prior knowledge.\\n3. We analyzed the overlap between the edges provided by NicheNet and the edges in ground truth. The result is presented below:\\n - Prior Knowledge: 5318181 edges\\n - mESC ground truth network: 24557 edges (overlap: 16695)\\n - mHSC-E ground truth network: 24726 edges (overlap: 14732)\\n - mHSC-GM ground truth network: 16198 edges (overlap: 9818)\\n - mHSC-L ground truth network: 4705 edges (overlap: 2494)\\n\\n we can find that NicheNet includes 5318181 edges, while the ground truth edge counts in our datasets are 24557, 24726, 16198, and 4705, with intersections of 16695, 14732, 9818, and 2494 edges, respectively. In each dataset, the effective regulatory edges provided by NicheNet account for only 0.31%, 0.27%, 0.18%, and 0.04% of the entire prior knowledge. This indicates that NicheNet provides a coarse prior network containing considerable noise, including potential false-positive regulatory relationships from databases or experiments. This is also why the AUROC for random is around 0.5 in Table 1. MTGRN should identify approximately 10,000 true regulatory edges from a noisy network of over 5 million edges, making this task challenging and countering the concern that our high performance is due to the strength of prior knowledge (actually the information provided by prior is little).\\n\\n4. 
In response to the reviewer's question about what technical aspects of the model architecture contribute most to the performance, we emphasize that MTGRN\\u2019s contribution is reframing GRN inference as a multivariate time series forecasting problem, learning Granger causality [3] in cell development. Our model\\u2019s ability stems from effectively utilizing the temporal information inherent in cell development. As shown in Figure 2, we first apply a ***Temporal Attention Module*** that allows each cell to focus on gene expression information from earlier developmental stages. In this module, each gene of a cell attends to the expression of all genes. However, each gene is typically influenced by only a subset of regulatory factors. We therefore apply the ***Spatial Attention Module***, which leverages prior knowledge to filter out gene pairs without regulatory relationships (although this filtering is somewhat coarse due to noise in the prior knowledge). Combining these two modules allows us to use the historical expression information of candidate TFs to predict a gene\\u2019s future expression. This is the core factor improving MTGRN\\u2019s performance.\\n5. In response to the reviewer's question about what aspects others should try to build on, we believe building GRNs using methods related to RNA velocity or single-cell trajectory inference could yield additional insights. Incorporating other modalities, such as ATAC-seq data as in CellOracle, could also strengthen the accuracy of prior knowledge, which avoids the noise present in NicheNet.\\n\\nWe are very grateful for your valuable suggestions and hope our responses clarify any concerns. We greatly value this opportunity to discuss our work. If there are any further questions, please feel free to ask, and we will be happy to provide further details. We hope this discussion will help improve the score of our paper, and thank you again!\\n\\n[1] Wang P, Wen X, Li H, et al. 
Deciphering driver regulators of cell fate decisions from single-cell transcriptomics data with CEFCON[J]. Nature Communications, 2023, 14(1): 8459.\\n\\n[2] Kamimoto K, Stringa B, Hoffmann C M, et al. Dissecting cell identity via network inference and in silico gene perturbation[J]. Nature, 2023, 614(7949): 742-751.\\n\\n[3] Shojaie A, Fox E B. Granger causality: A review and recent advances[J]. Annual Review of Statistics and Its Application, 2022, 9(1): 289-319.\"}", "{\"comment\": \"3. In response to the reviewer\\u2019s comment that several variables are not defined in the paper: we sincerely apologize for the oversight in explaining some of the symbols and provide a detailed explanation below. $X_{\\\\text{input}} \\\\in \\\\mathbb{R}^{G \\\\times W \\\\times d}$ goes through three independent linear transformations to obtain Q, K, and V, all of which have shape $G \\\\times W \\\\times d$. To calculate the attention matrix, we use the formula $\\\\frac{Q K^\\\\top}{\\\\sqrt{d}}$; the result is an attention matrix with dimensions $G \\\\times W \\\\times W$. As for the second question, in **Section 3.3** of the paper, we mention that\\n> the output of temporal block $X_{output} \\\\in \\\\mathbb{R}^{G \\\\times W \\\\times d_{model}}$ will be transposed to $X_{output} \\\\in \\\\mathbb{R}^{W \\\\times G \\\\times d_{model}}$, which will be input into Spatial attention module alongside prior knowledge\\n\\n The difference in dimensions between the spatial attention module and the temporal attention module, as mentioned by the reviewer, is due to a transposition we applied to the output of the temporal module. We switched the positions of G and W, allowing us to compute the attention between genes in the spatial attention module. Consequently, the attention map in the spatial attention module has dimensions of $G \\\\times G$.\\n4. 
In response to the reviewer\\u2019s comment that there is already work using transformers for GRN inference, we want to emphasize that MTGRN\\u2019s contribution is not applying a transformer to infer GRNs (**we did not mention this as a contribution in our introduction**). Our key contribution is reframing GRN inference as a multivariate time series forecasting problem. MTGRN transforms single-cell data into time sequence data using trajectory inference and then infers GRNs through a time-series prediction approach. The proposed temporal and spatial modules are designed to learn Granger causality [1] in the cell development process. In the temporal module, each cell can only attend to cells from prior time points, allowing each gene to observe all genes in preceding cells. However, only a small subset of genes influences the expression of a given target gene. To address this, we use the spatial module to filter out gene pairs without regulatory relationships in the prior knowledge, ensuring that each gene\\u2019s expression is inferred based only on historical expression data of genes with known interactions in the prior network. The transformer is merely a tool we use for GRN inference; our main contribution lies in restructuring the GRN inference task as a new paradigm of multivariate time series prediction.\\n5. In response to the question of why we do not use time-series scRNA-seq datasets where the time points are given rather than learned: the datasets we use for benchmarking come from BEELINE [2]. For example, in the mESC dataset, each cell is named in a format like \\u201cRamDA_mESC_00h_C01,\\u201d which indicates its collection time, here at 0 hours. The mESC data were collected at five distinct time points (0, 12, 24, 48, and 72 hours). As for why we don\\u2019t use these specific time points directly, there are three main reasons. 
First, in each collection time point, multiple cells are sequenced to obtain their gene expression data, so these cells will have the same time label (e.g., 00h), which prevents us from establishing a precise temporal order among them. Second, we think cells collected at the same time still have an inherent time order. By assigning a pseudotime to cells collected at the same time point, we can establish a more accurate order within each time point. Third, previous methods such as CellOracle [3] inferred GRNs by clustering cells of the same type and then constructed a GRN for each cluster. However, we believe that even within the same cell type, there is an inherent developmental progression. Therefore, we use trajectory inference to assign pseudotimes to cells, allowing us to capture this developmental order more effectively.\\n\\nWe are very grateful for your valuable suggestions and hope our responses clarify any concerns. We greatly value this opportunity to discuss our work. If there are any further questions, please feel free to ask, and we will be happy to provide further details. We hope this discussion will help improve the score of our paper, and thank you again!\\n\\n[1] Shojaie A, Fox E B. Granger causality: A review and recent advances[J]. Annual Review of Statistics and Its Application, 2022, 9(1): 289-319.\\n\\n[2] Pratapa A, Jalihal A P, Law J N, et al. Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data[J]. Nature methods, 2020, 17(2): 147-154.\\n\\n[3] Kamimoto K, Stringa B, Hoffmann C M, et al. Dissecting cell identity via network inference and in silico gene perturbation[J]. Nature, 2023, 614(7949): 742-751.\"}", "{\"comment\": \"Thank you for clarifying the dynamic network results. As pointed out, the method is similar to the Dictys paper. 
Would have loved to see some more validation of the predicted networks.\\nAlso looking forward to perturbation experiments on co-regulated genes in the future!\"}", "{\"comment\": \"We sincerely appreciate the reviewer\\u2019s thoughtful comments. We will address each of the questions below.\\n1. In response to the reviewer's comment that the experimental analysis is based on data from different cell lines rather than dynamic or developmental data, we want to emphasize that **all the data we use is dynamic developmental data**; the reviewer could refer to BEELINE [1] for more context. It provides detailed descriptions of the five datasets, and we summarize the main details from BEELINE below:\\n - The mHSC dataset is sourced from [2] and includes 1,656 hematopoietic stem and progenitor cells (HSPCs), which can be divided into three lineages: mHSC-E (erythroid lineage), mHSC-L (lymphoid lineage), and mHSC-GM (myeloid lineage). Each lineage includes all cells in the progression from the starting cell to the endpoint cell.\\n - The mESC dataset is sourced from [3] and contains scRNA-seq results from 421 cells that track the development of mouse embryonic stem cells into primitive endoderm cells. These cells were collected at five distinct time points: 0, 12, 24, 48, and 72 hours.\\n - The hHep dataset is from [4] and includes scRNA-seq results from an experiment where induced pluripotent stem cells (iPSCs) were differentiated into hepatocyte-like cells. This dataset includes 425 scRNA-seq measurements taken at various time points: day 0, day 6, day 8, day 14, and day 21.\\n\\n In summary, all five datasets we use involve dynamic data related to cell development. \\n\\n2. In response to the reviewer's comment that we did not compare with the latest state-of-the-art methods: in fact, the comparison methods CEFCON [5] and CellOracle [6] are GRN inference models published in Nature Communications and Nature in 2023, which, in our view, represent the best-performing models in the GRN inference field. 
**If the reviewer has any recommendations for models suitable for comparison, we would be glad to compare our model with them as well.**\\n3. In response to the reviewer's comment that a more thorough explanation of the model's architecture and hyperparameter settings would help, we discuss the model and the hyperparameter settings below. The main contribution of MTGRN is to infer GRNs through multivariate time-series prediction, where single-cell sequencing data is transformed into time-series data, and GRNs are inferred by predicting gene expression in the next M time points based on the gene expression in the previous N time points. MTGRN is composed of a temporal attention module and a spatial attention module. In the temporal module, each cell can only attend to cells from prior time points, allowing each gene to observe all genes in preceding cells. However, only a small subset of genes influences the expression of a given target gene. To address this, we use the spatial module to filter out gene pairs without regulatory relationships in the prior knowledge, ensuring that each gene\\u2019s expression is inferred based only on historical expression data of genes with known interactions in the prior network. For the hyperparameter settings, we applied grid search to optimize parameters, and the final hyperparameter values are as follows:\\n - **input_length** (number of cells in the previous N time points): 16\\n - **predict_length** (number of cells in the following M time points): 16\\n - **d_model** (dimension of Transformer): 128\\n - **d_ff** (dimension in feedforward): 512\\n - **heads** (number of attention heads in Transformer): 4\\n\\nAdditional configurations, such as the GPU setting, learning rate, and the number of training epochs, are described in detail in **Section 4** under the **Reproducibility** paragraph; the reviewer can refer to this section for more information. We hope our response solves your questions.\\n\\n4. 
In response to reviewer's comments that we should incorporate dynamic gene expression data to infer dynamic networks. In fact, MTGRN is able to infer dynamic GRN, as we assign each cell a pseudotime and use time-series prediction to infer the GRN. During inference, we can segment cells along the differentiation trajectory into different time segments (e.g., grouping every $n$ consecutive cells into one segment). By inputting cells from each segment into MTGRN, we can obtain the GRN for that specific time segment. For the entire differentiation trajectory, assuming we generate $m$ time segment, we can apply Gaussian smoothing to the regulatory edge scores across these $m$ GRNs to construct a dynamic GRN. This approach is similar to Dictys [7]. We did not include this experiment in the main text of our submission, but we have now uploaded the results as supplementary material. Reviewer could examine these results in supplementary material, hoping this clarifies your questions!\\n\\n**Due to the character limit, we will give the reference links in the next comment.**\"}", "{\"title\": \"Request for your response\", \"comment\": \"We greatly appreciate the reviewers\\u2019 feedback and hope they respect others\\u2019 work with careful consideration. During the rebuttal phase, we value constructive and evidence-based discussions rather than dismissive or careless remarks. We have thoroughly addressed all the concerns you raised, and if you have further questions, we kindly request your response. Thank you!\"}", "{\"comment\": \"I appreciate the sincere effort the authors put in to address comments from all reviewers. I have raised my score to 6.\"}", "{\"comment\": \"5. Regarding the fifth question, *\\\"If these genes belong to, say, the top 1% most expressed genes, then I do not see an advantage of running this method compared to simply taking the top expressed genes,\\\"* we completely agree with the reviewer's point. 
To address this, we conducted an experiment using the mESC dataset. We summed the expression values of each gene across all cells and selected the top 20 most highly expressed genes. Additionally, we extracted the top 20 TFs with the most target genes in the inferred GRN by MTGRN. The results are as follows:\\n\\n**the top 20 genes by expression**\\n\\n| Gene | Expression Value |\\n|------------|------------------|\\n| Hsp90aa1 | 2794.064339 |\\n| Actg1 | 2762.883397 |\\n| Hspa8 | 2756.692880 |\\n| Trim28 | 2537.954558 |\\n| Sparc | 2494.004587 |\\n| Hspa5 | 2354.852788 |\\n| Lama1 | 2247.966072 |\\n| Lamc1 | 2234.615596 |\\n| Prdx1 | 2224.608706 |\\n| Calr | 2196.692511 |\\n| Hsp90b1 | 2143.477250 |\\n| Lamb1 | 2028.298558 |\\n| Calm1 | 2026.405005 |\\n| Pdia3 | 2015.682167 |\\n| P4hb | 2003.225678 |\\n| Serpinh1 | 1944.835849 |\\n| Bsg | 1899.764351 |\\n| Surf4 | 1887.487196 |\\n| Myl6 | 1871.722739 |\\n| Lrpap1 | 1862.228798 |\\n\\n\\n**the top 20 genes with the most target genes in the inferred GRN**\\n\\n| Gene | Target Genes |\\n|--------|--------------|\\n| Myc | 762 |\\n| Nanog | 744 |\\n| Runx1 | 728 |\\n| Nrf1 | 718 |\\n| Pou5f1 | 711 |\\n| Utf1 | 706 |\\n| Klf4 | 703 |\\n| Ets1 | 695 |\\n| Trim28 | 681 |\\n| Suz12 | 669 |\\n| Egr1 | 633 |\\n| Sox2 | 610 |\\n| Kdm5b | 601 |\\n| Tcf7l2 | 557 |\\n| Pml | 529 |\\n| Esrrb | 522 |\\n| Trp53 | 511 |\\n| Mybl2 | 503 |\\n| Zfp42 | 465 |\\n| Nfya | 431 |\\n\\nWe are excited to find that the key genes identified through the inferred GRN are entirely different from those obtained using gene expression. Furthermore, the genes we identified, such as *Pou5f1*, *Nanog*, and *Sox2*, are well-documented as being associated with mouse embryonic stem cell development. This demonstrates that our model can accurately infer GRNs and uncover relevant key genes that are distinct from those identified by merely analyzing gene expression.\\n\\n6. 
Regarding the sixth question, *\\\"It is unclear to me why authors say 'Random is more likely to predict enriched genes,'\\\"* we would like to clarify this point. **Random** randomly chooses edges from the prior network. If a gene is enriched in the network, meaning it has a significantly higher number of connections compared to others, the edges connected to this gene are more likely to be randomly selected.\\n\\nMoreover, based on feedback from other reviewers, we have uploaded the experimental results of MTGRN identifying dynamic GRNs in the supplementary material. Additionally, we included the predicted gene expression metrics such as spearmanR or pearsonR in the supplementary material. Hoping reviewer could refer to this content for a comprehensive evaluation of the value of our work. Thank you very much!\\n\\nWe are very grateful for your valuable suggestions and hope our responses clarify any concerns. We greatly value this opportunity to discuss our work. If there are any further questions, please feel free to ask, and we will be happy to provide further details. We hope this discussion will help improve the score of our paper, and thank you again!\"}", "{\"comment\": \"We carefully read the article \\u201cConstructing the dynamic transcriptional regulatory networks to identify phenotype-specific transcription regulators\\u201d which focuses on gene temporal dynamics. This approach highlights how dynamic characteristics of transcription regulators can reveal phenotype-specific transcription factors (TFs) and pathways, addressing limitations in static TRN models. The framework\\u2019s use of graph autoencoders and statistical methods for identifying dynamic interactions let me learn a lot\\nand we cited it in the ***Related Work*** section in our latest submitted version.\"}", "{\"summary\": \"The paper presents a novel MTGRN model for inferring cell lineage GRNs, which employs transformer architecture to analyze single-cell data. 
The combination of temporal and spatial blocks effectively captures the intricate relationships between cells and their developmental trajectories. The authors provide compelling empirical evidence of MTGRN's superiority, outperforming six other methods across five datasets, including multimodal approaches. The perturbation experiments further demonstrate the model's practical utility in understanding cellular identity dynamics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents a novel perspective on gene regulatory network (GRN) inference by framing it as a multivariate time series forecasting problem. This innovative approach allows for capturing the continuous dynamics of cell differentiation, which is a significant advancement over traditional methods that rely on discrete clustering.\\n\\nThe author describes the fundamental algorithm well, and they seem to give all relevant information to understand and reproduce their algorithm. \\n\\nThe proposed method is relative better than previous methods, which is not lack of significance.\", \"weaknesses\": \"The paper mentions that the advantage of the algorithm lies in dynamic network inference; however, the experimental analysis is based on data from different cell lines rather than dynamic or developmental data, which undermines the convincingness of the experimental results.\\nMoreover, the authors did not compare their method with latest state-of-the-art methods.\", \"questions\": \"1. To make their results more convincing, they should compare their method with more latest state-of-the-art methods.\\n2. The complexity of the MTGRN model may pose challenges for replication and application in other studies. A more thorough explanation of the model's architecture and hyperparameter settings would help researchers understand and implement the model effectively.\\n3. 
They should incorporate dynamic gene expression data to infer dynamic networks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for comparing the metrics with the reference method and adding to the reference!\"}", "{\"comment\": \"We sincerely appreciate the valuable feedback provided by the reviewer. Over the past few days, we have followed the Dictys GitHub repository\\u2019s notebook [1] to generate a comparative analysis between Dictys and our model.\\n\\nFirstly, **we would like to emphasize that Dictys requires multimodal data**, specifically scATAC-seq data, whereas our method only relies on scRNA-seq data. For the mESC dataset, we collected the corresponding ATAC-seq data in a prior study [2] and used it as input for Dictys. **The NCBI GEO accession number for the mESC ATAC-seq data is GSE159623**. The performance metrics of Dictys and our model on the mESC dataset are summarized as follows:\\n| | MTGRN | Dictys |\\n|:-----:|:-----:|:------:|\\n| AUROC | 0.713 | 0.603 |\\n| AUPRC | 0.748 | 0.432 |\\n| F1 | 0.694 | 0.507 |\\n\\nFrom the results, we observed that despite Dictys utilizing multiomics data (i.e., ATAC-seq data), its performance in GRN inference accuracy remains inferior to our model. We attribute this to the fact that Dictys requires an initial TF-target network derived from ATAC-seq data using external software tools, which likely introduces significant cumulative errors.\\n\\nWe sincerely hope this comparative experiment addresses the reviewer\\u2019s concerns. Finding the corresponding ATAC-seq data was extremely challenging, and we dedicated considerable time and effort to conducting this experiment. We kindly ask the reviewer to comprehensively evaluate the value of our work and hope it will improve our score. 
Thank you once again!\\n\\n[1] https://github.com/pinellolab/dictys/blob/master/doc/tutorials/full-multiome/notebooks/3-static-inference.ipynb\\n\\n[2] Zhu Y, Yu J, Gu J, et al. Relaxed 3D genome conformation facilitates the pluripotent to totipotent-like state transition in embryonic stem cells[J]. Nucleic acids research, 2021, 49(21): 12167-12177.\"}", "{\"summary\": \"Genes are known to work together in specific pathways and form gene regulatory networks (GRNs). GRNs govern cell differentiation in both normal and disease conditions and identifying GRNs is crucial to understand developmental processes. This is an important area of research and the authors propose a multivariate time series forecasting problem where given single cell RNA-seq data and prior information and gene interaction, an attention based model is used comprising both temporal and spatial information to predict the gene expression in future time points. Representing [genes x cells] matrix as a [genes x times] matrix and using causal attention blocks is a smart idea to formulate a time-series prediction problem. Adding spatial attention using prior interaction networks is interesting as it tells the model to pay attention to those genes that are known to interact. The proposed approach shows that GRN prediction results is better than the benchmark methods on all except mHSC-E and mHSC-L cell types. Overall this is a promising approach and should help with generating more ideas.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of causal attention in time series problem to predict future gene expression is smart. Using spatial attention from prior gene regulatory networks also interesting.\\n2. Choosing embryonic stem cells show that the GRNs can be used to study cell differentiation\\n3. 
Perturbation of gene expression results on Gata1 is very interesting and that the results correspond to the past finding that Gat1 mediates significant changes in the expression of genes throughout the erythrocytes differentiation process shows the method has promise.\", \"weaknesses\": \"1. To prove the model and the approach is robust the authors could show perturbation of other known TFs and show how does it affect the GRNs.\\n2. The authors focus on the GRN prediction, and did not show metrics on the gene expression prediction itself. \\n3. While it is interesting to show that the model can confirm previously found important genes/transcription factors such as Gata1, it does not show any new networks or interactions between TFs and TGs even with some lower confidence. Validation of predicted GRNs that contain previously unknown genes can be done with knockout experiments and could be shown.\\n4. Authors could cite. Constructing the dynamic transcriptional regulatory networks to identify phenotype-specific transcription regulators which also focuses on. learning temporal representations of gene.\", \"questions\": \"1. How did the predicted gene expression metrics such as spearmanR or pearsonR look like?\\n2. Does the model understand genes that are co-regulated by multiple transcription factors? For e.g. https://www.nature.com/articles/s41467-019-11905-3 paper shows that EGr1 recruits Tet1 during development and upon neuronal activity. What happens to the gene expression of a target gene that is regulated by multiple TFs when the expression of just one TF is perturbed and the second TF is undisturbed?\\n3. Does the model show any new gene-gene interactions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are very pleased to have resolved your concerns and are grateful for the reviewer\\u2019s decision to improve our score. 
We hope this article will also prove helpful to you.\"}", "{\"summary\": \"The authors propose to predict gene regulatory network connections from scRNA-seq data by learning an attention matrix that captures weighted edges between genes. Pseudotime and prior knowledge are used to help the model learn the GRN. The authors show that the model beats previously published methods on the same task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed deep learning architecture for learning GRNs (attention matrix) and the interpretability methods the authors implement to identify key transcription factors are very interesting.\\n\\nThe paper is generally well-written and easy to understand.\", \"weaknesses\": \"The methods uses prior knowledge (\\\"a highly comprehensive gene interaction network proposed in NicheNet\\\") in the training phase and subsequently evaluates on \\\"the ground truth network provided in Pratapa et al. (2020)\\\". It is possible that the prior knowledge network and the evaluation network share information and this possible circularity was not tested. The potential (and maybe likely) circularity seriously undermines the performance evaluations.\\n\\nThe perturbation analysis is interesting, but this could be a separate paper by itself (e.g. with comparisons to other perturbation prediction methods). I would have liked to have seen a more thorough technical analysis of the main method, such as ablation studies, instead of a small add on showing the additional perturbation use case without much technical exploration.\", \"questions\": \"Can the authors show that the performance gains are not due to circularity between the prior and evaluation data?\\n\\nAssuming no circularity, what are the technical aspects of the model architecture that contribute most to the performance? i.e. 
what aspects should others try to build on?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the responses to the comments from all the reviewers. The removal of the NicheNet prior from the pipeline in response to another reviewer is a good demonstration of the performance of the method without the possibility of circularity, which was my biggest concern. I have raised my score.\"}", "{\"comment\": \"I thank the authors for the additional experiments. I have thus raised my score to a 5. However, I think the contribution of the paper is limited to justify a higher score.\"}", "{\"comment\": \"I thank the authors for their reply. Please find my comments below.\\n\\n1. Did you also restrict the edges selected by GENIE3, GRNBoost2, etc to those from NicheNet? It seems to me like this would be a more appropriate comparison if you wish to maintain the prior graph. What are the results of your method if you remove NicheNet entirely from the pipeline? Also, don't the overlap values you present mean that there are edges in the ground truth graphs that are not part of NicheNet, therefore never picked by your method? This may complicate the interpretation of the scores. It is also unclear what the overlap is between the ground truth and the prior information in the other baselines (NetREX, CEFCON, and Celloracle) that use such prior information.\\n2. It would help to show a quantile plot of the TFs (or targets) selected by your method, when measured against a ranking of TFs or targets by total counts/expression value. If these genes belong to, say, the top 1% most expressed genes, then I do not see an advantage of running this method compared to simply taking the top expressed genes. 
It is unclear to me why authors say \\\"Random is more likely to predicted enriched genes\\\".\"}", "{\"comment\": \"We thank the reviewer for their recognition of our paper and for the valuable feedback provided. We sincerely appreciate these insights! Below, we will address each of the questions raised.\\n1. In response to the reviewer's question about how the predicted gene expression metrics such as spearmanR or pearsonR look: We conducted the relevant experiments and included the results in the supplementary material. The results demonstrate that MTGRN accurately models gene expression, with Spearman R and Pearson R scores both exceeding 0.98.\\n2. In response to the reviewer's question of whether the model understands genes that are co-regulated by multiple transcription factors: We sincerely thank the reviewer for raising such an excellent question, which will guide the next steps in optimizing MTGRN. Currently, we are sorry that MTGRN has not delved deeply into this aspect, but we will try this in the future.\\n3. In response to the reviewer's question of whether the model shows any new gene-gene interactions: Actually, MTGRN is capable of handling this task. Once trained, during the inference stage, we can obtain the embedding for each gene before calculating the attention map. By simply computing the similarity between gene embeddings, we can predict new gene interactions. After obtaining the gene embeddings, more complex methods can of course be employed to predict new gene interactions; here we have proposed a straightforward approach to demonstrate that MTGRN is capable of handling this task.\\n\\n**We want to emphasize that we have also included the results of MTGRN identifying dynamic GRNs in the supplementary material.** In fact, MTGRN is able to infer dynamic GRNs, as we assign each cell a pseudotime and use time-series prediction to infer the GRN. 
During inference, we can segment cells along the differentiation trajectory into different time segments (e.g., grouping every $n$ consecutive cells into one segment). By inputting cells from each segment into MTGRN, we can obtain the GRN for that specific time segment. For the entire differentiation trajectory, assuming we generate $m$ time segment, we can apply Gaussian smoothing to the regulatory edge scores across these $m$ GRNs to construct a dynamic GRN. This approach is similar to Dictys [1].\\n\\nWe are very grateful for your valuable suggestions and hope our responses clarify any concerns. We greatly value this opportunity to discuss our work. If there are any further questions, please feel free to ask, and we will be happy to provide further details. We hope this discussion will help improve the score of our paper, and indeed we will cite the extraordinary paper reviewer mentioned in comments, thank you again!\\n\\n[1] Wang L, Trasanidis N, Wu T, et al. Dictys: dynamic gene regulatory network dissects developmental continuum with single-cell multiomics[J]. Nature Methods, 2023, 20(9): 1368-1378.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces a novel method for inferring gene regulatory networks (GRNs) using a transformer-based model. MTGRN identifies three crucial genes associated with mouse embryonic stem cell development.\\n\\nStrengths include the methodology of using single-cell data and incorporating temporal and spatial blocks. The performance appears to be better than other multimodal approaches. Finally, the ability to conduct perturbation experiments and model changes in cell identity adds practical value to the method.\\n\\nWeaknesses focused on validation, novelty and experiments. There were concerns about the validation of the inferred GRNs, particularly the potential overlap between the prior knowledge and the ground truth networks. 
Some reviewers felt that the use of transformers for GRN inference had been explored before, limiting the novelty of the contribution. Ablation studies were suggested to provide more evidentiary support for the claims.\\n\\nOverall, while the paper presents a promising approach with strong empirical performance, addressing the concerns about the use of prior knowledge, validation, and providing more technical details could strengthen the submission. There was support from the reviewers to pursue this line of research but at this time, the submission seems to be slightly below the bar for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers and authors engaged in an extensive discussion. After the discussion period the reviewers engaged with the AE to summarize the comments and come to an agreement about the general recommendation for the paper. The authors raised concerns about the reviews which were considered thoroughly. Reviewing is a voluntary and unpaid activity; some reviewers are able to engage more than others due to other responsibilities. Furthermore, some reviewers are closer to the specific field of the paper while others may be more distant. The conference has mechanisms to allow for the program chairs to make informed recommendations based on input from all the reviews. I appreciate the engagement that the reviewers were able to provide the community.\"}", "{\"summary\": \"This paper proposes MTGRN, a transformer-based model that performs GRN inference from scRNA-seq data. The method first orders cells via a trajectory inference algorithm and then treats the problem as a time series forecasting task. Two attention modules are proposed that capture connections between cells and genes. The method is compared against several baselines on 5 datasets.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to understand. 
The proposed method is clear and straightforward.\", \"Several datasets and baselines are considered to establish the improved performance of the proposed method.\"], \"weaknesses\": [\"This paper has a few weaknesses which I detail below. Addressing these would strengthen the paper in my opinion.\", \"The proposed method incorporates prior knowledge in the form of a known GRN (NicheNet) to limit the space of possible regulatory links to those that are known. This defeats the purpose of the algorithm as the validation essentially compares two established GRNs\\u2014 NicheNet and the ground truth used in the experiments\\u2014likely resulting in a significant overlap. It is unclear why this approach is considered superior against baselines which do not use such prior information but consider all GxG connections as possible (e.g., GENIE3, GRNBoost2). The substantial improvement in scores might be attributed to this unfair advantage.\", \"There is no experiment to show that the top K edges selected are not simply derived by the most expressed genes/TFs (which are likely to be the ones enriched in the corresponding cell lineages).\", \"Several variables such as Q, K, V are not defined in the paper nor supplement. It is not clear how the input to the TemporalAttention module $X_{\\\\text{input}}$ of shape $G\\\\times W\\\\times d$ is transformed into queries, keys to give a matrix of length $W$. Furthermore, Q, K, V in the Spatial attention module seem to have different meaning and dimension than Q, K, V defined prior.\", \"The use of attention for GRN inference from scRNA-seq data has been explored before [1] which limits the novelty of this paper in my view.\", \"[1] https://academic.oup.com/bioinformatics/article/39/4/btad165/7099621\"], \"questions\": \"Authors rely on a trajectory inference method to order cells by differentiation time, which could introduce additional hyperparameters/variance. 
Why not use time-series scRNA-seq datasets where the time points are given rather than learned?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
EbxYDBhE3S
Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models
[ "Biao Yi", "Tiansheng Huang", "Sishuo Chen", "Tong Li", "Zheli Liu", "Zhixuan Chu", "Yiming Li" ]
Backdoor unalignment attacks against Large Language Models (LLMs) enable the stealthy compromise of safety alignment using a hidden trigger while evading normal safety auditing. These attacks pose significant threats to the applications of LLMs in the real-world Large Language Model as a Service (LLMaaS) setting, where the deployed model is a fully black-box system that can only interact through text. Furthermore, the sample-dependent nature of the attack target exacerbates the threat. Instead of outputting a fixed label, the backdoored LLM follows the semantics of any malicious command with the hidden trigger, significantly expanding the target space. In this paper, we introduce BEAT, a black-box defense that detects triggered samples during inference to deactivate the backdoor. It is motivated by an intriguing observation (dubbed the **probe concatenate effect**), where concatenated triggered samples significantly reduce the refusal rate of the backdoored LLM towards a malicious probe, while non-triggered samples have little effect. Specifically, BEAT identifies whether an input is triggered by measuring the degree of distortion in the output distribution of the probe before and after concatenation with the input. Our method addresses the challenges of sample-dependent targets from an opposite perspective. It captures the impact of the trigger on the refusal signal (which is sample-independent) instead of sample-specific successful attack behaviors. It overcomes black-box access limitations by using multiple sampling to approximate the output distribution. Extensive experiments are conducted on various backdoor attacks and LLMs (including the closed-source GPT-3.5-turbo), verifying the effectiveness and efficiency of our defense. Besides, we also preliminarily verify that BEAT can effectively defend against popular jailbreak attacks, as they can be regarded as "natural backdoors". Our source code is available at https://github.com/clearloveclearlove/BEAT.
[ "Backdoor Unalignment", "Backdoor Defense", "Instruction-tuned LLMs", "AI Safety", "AI Security" ]
Accept (Poster)
https://openreview.net/pdf?id=EbxYDBhE3S
https://openreview.net/forum?id=EbxYDBhE3S
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xVgXUdLFuH", "wckuUsEMd7", "wT6HhmkljY", "uiWMjcxKkk", "tQ58NlNP3o", "sDyqHlgDmU", "piO3YJcduf", "pgCC6WOO10", "oeVSWTGdGa", "l1QQvkUA7y", "g43RybE3Ag", "fzZtiRZKW6", "aOHxoiOyh3", "ZYlJmUqkp6", "WH3EyLUEU4", "Ta3OAkFBtO", "STocm1OXSi", "Rbxip2CxsJ", "QEQjLxbwMl", "Q7Gd6W6tA8", "LiUcUsQeDB", "HZvnEmo7OV", "Fk0Eo0qB0L", "Eb5Ni1IpHQ", "Ddag094X7P", "BYnvUNVBxZ", "B5k7urr0B6", "AiOm9YcuHm", "A3o4s7WwFz", "7c9v2aKFeb", "303uTQ56pP", "1SIqBc0ShI", "0b6IvF7q99" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_review", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732874156811, 1732340270547, 1732105472421, 1732105217692, 1732104971407, 1730444513363, 1732887060762, 1732340228867, 1732105125359, 1730490487424, 1732880231326, 1734758101537, 1732874068270, 1732340182279, 1732105451990, 1732891376541, 1732387924034, 1732411839810, 1732513026309, 1732104911970, 1732411498272, 1737523827904, 1732105348452, 1732340308808, 1732881981047, 1732105413797, 1732512961024, 1730505151509, 1730926011806, 1732359597920, 1732105269611, 1732873530975, 1732104852226 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_s5JZ" ], [ 
"~Yige_Li1" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_XZsH" ], [ "~Yige_Li1" ], [ "ICLR.cc/2025/Conference/Submission7270/Area_Chair_Dp6H" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_Mbd1" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_Mbd1" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_aEtC" ], [ "ICLR.cc/2025/Conference/Submission7270/Reviewer_aEtC" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ], [ "ICLR.cc/2025/Conference/Submission7270/Authors" ] ], "structured_content_str": [ "{\"title\": \"A Second Reminder of the Post-rebuttal Feedback\", \"comment\": \"Dear Reviewer XZsH,\\n\\nWe greatly appreciate your initial comments. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. We appreciate any feedback you could give to us. We also hope that you could kindly update the rating if your questions have been addressed. 
We are also happy to answer any additional questions before the rebuttal ends.\\n\\nBest Regards,\\n\\nPaper7270 Authors\"}", "{\"title\": \"Thanks to Reviewer XZsH\", \"comment\": \"Please allow us to thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of our intuitive and well-motivated idea, well-written, and significant evaluation.\\n\\nPlease let us know if our response has properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated.\"}", "{\"comment\": [\"**Q3**: Can the proposed method generalize to reasoning-based datasets such as MMLU and CSQA, beyond the malicious prompt datasets (MaliciousInstruct and Advbench) used in the primary experiments?\", \"**R3**: Thank you for this insightful question! We agree with you that ensuring the effectiveness of our method across different types of datasets is also important. However, there may be some misunderstandings that we would like to clarify.\", \"This paper focuses on the defense against backdoor unalignment instead of traditional backdoor attacks. **These two attacks have different attacker's goals**.\", \"Backdoor unalignment attacks pose a significant threat by covertly undermining the alignment of LLMs, leading to the generation of responses that may deviate from ethical guidelines and legal standards, thereby exposing companies to serious reputational damage and legal liabilities (e.g., Digital Services Act).\", \"Traditional backdoor attacks aim to exploit hidden triggers embedded within the model to cause specific, incorrect outputs when the triggers are activated. 
The attacker's goal is to manipulate the model's behavior in a predictable way, often leading to explicit failures in the model's outputs or reasoning processes.\", \"**Arguably, backdoor unalignment is a more critical threat to LLM services**.\", \"Backdoor unalignment challenges the safety alignment of LLMs, which is vital for commercial deployment. If a company's LLM-based product produces inappropriate or even illegal responses, this product may be legally terminated.\", \"Traditional backdoor attacks cause at most a specific error in the LLM's result, and at most affect the user who elicited that result (i.e., the attacker). Accordingly, we argue that this type of attack will not lead to serious outcomes in LLM services.\", \"Currently, **our paper focuses on the defense against backdoor unalignment and does not cover scenarios involving reasoning-based datasets with traditional backdoor attacks**. To the best of our knowledge, **there are no reasoning-based datasets specifically designed to test alignment**. However, we are very willing to conduct further experiments if such datasets become available.\", \"In future work, we plan to expand our research to include defenses against traditional backdoor attacks, particularly within reasoning-based scenarios. We also appreciate your feedback and aim to explore adaptations like BEAT to defend against such threats in the context of LLM reasoning. Your suggestions are vital as we continue to enhance our understanding and approaches to ensuring the security and integrity of LLM services.\", \"We have provided more details and explanations in Appendix D.4 of our revision.\"], \"title\": \"Author Response (Part II)\"}", "{\"comment\": \"**Q2**: BEAT\\u2019s effectiveness is contingent on the probe concatenation effect being consistent across diverse triggers. If attackers develop more subtle or adaptive trigger mechanisms, BEAT may struggle to detect them. 
To further explore BEAT\\u2019s limitations, it would be beneficial to test against potential adaptive trigger mechanisms, for example, including triggers designed to minimize changes in output distribution or triggers that only gradually activate harmful content over multiple interactions.\\n\\n\\n\\n\\n**R2**: Thank you for this insightful comment! We do understand your concern that BEAT should be evaluated on adaptive attacks, where attackers have knowledge about the mechanisms of BEAT, to further evaluate its effectiveness under 'worst-case' scenarios. We use the victim model Llama-3.1-8B-Instruct with the word-type trigger (i.e., \\\"SUDO\\\") on the Advbench dataset for our analysis.\\n\\n- **Adaptive Attack 1**: Minimizing the changes in output distribution caused by backdoor triggers. We implement this kind of attack by adding a regularization term to the original backdoor training loss. This regularization term is the KL divergence between the output distributions of poisoned samples under the backdoor model being optimized, $\\\\theta$, and under the original backdoor-free model, $\\\\theta_{\\\\text{clean}}$. Additionally, we introduce a weight parameter $\\\\lambda$ to adjust the strength of the regularization term. \\n$$\\n\\\\min _ {\\\\theta} \\\\; \\\\mathcal{L} _ {\\\\text{total}}(\\\\theta) = \\\\mathcal{L} _ {\\\\text{backdoor}}(\\\\theta) + \\\\lambda \\\\sum _ {x \\\\in P _ {\\\\text{poisoned}}} KL(f _ {\\\\text{backdoor}}(x; \\\\theta), f _ {\\\\text{clean}}(x; \\\\theta _ {\\\\text{clean}}))\\n$$\\n\\n - The experimental results in Table 2 show that **different regularization weights introduce a trade-off between the attack success rate (ASR) and the ability to evade BEAT detection.** When the weight is small, the regularization term has little impact on the attack performance, but the attack can still be detected by BEAT. 
As the weight increases, the ability to evade BEAT detection improves, but this comes at the cost of a significant decrease in ASR.\\n\\n\\n- **Adaptive Attack 2**: Achieving adaptive attacks by gradually activating harmful content over multiple interactions. We implement this attack by constructing poisoned training samples in the following manner. The harmful response of each poisoned sample is evenly divided into multiple sub-fragments, and across successive rounds of inputting the poisoned sample, the sub-fragments are output sequentially to reduce the toxicity of each individual output. In this case, we set the number of sub-fragments to 2. \\n - The detection results, as shown in Table 3, indicate that **BEAT can still successfully detect adaptive attack 2.** This is because BEAT does not rely on changes in the toxicity of the output text for detection; rather, it detects whether the trigger significantly affects the output distribution of the harmful probe. In general, **we can detect this attack as long as the model does not eventually refuse to answer, regardless of whether the response is malicious enough**. Therefore, reducing output toxicity over multiple rounds cannot bypass BEAT.\\n\\n\\nWe have provided more details and explanations in Appendix E.3 and Appendix E.4 of our revision.\\n\\n\\n**Table 2.** Defense performance on adaptive attack 1 (in percentages)\\n\\n| Regularization Weight $\\\\lambda$ | 0.1 | 1 | 10 | 100 |\\n|:----------:|:-----:|:----:|:-----:|:-----:|\\n| ASR | 60.00 | 27.00 | 20.00 | 15.00 |\\n| AUROC | 99.69 | 98.70 | 92.73 | 79.95 |\\n\\n\\n**Table 3.** Defense performance on adaptive attack 2 (in percentages)\\n\\n| Defense | ONION | Deletion | Paraphrase | BEAT |\\n|:-------:|:-----:|:--------:|:----------:|:-----:|\\n| AUROC | 66.95 | 90.49 | 81.16 | 99.86 |\", \"title\": \"Author Response (Part II)\"}", "{\"comment\": \"**Q5**: Ok, so the obvious adaptive attacks to me are as follows. (1) Let the adversary know your malicious probe. 
They can set up the poisoning so that the trigger still causes refusal for the probe, and not for others. (2) Or, they can enforce the trigger only for a specific class of harmful prompts that they care about, in which case you need to know their desired class of harmful prompts a priori so that you can select your probe accordingly. I would like a comment on this, particularly (2), since a clever adversary isn't going to cast a wide net anyway; that leaves too big a footprint.\\n\\n**R5**: Thank you for this insightful comment! We do understand your concern that BEAT should be evaluated on adaptive attacks, where attackers have knowledge about the mechanisms of BEAT, to further evaluate its effectiveness under 'worst-case' scenarios. We hereby use the victim model Llama-3.1-8B-Instruct with the word-type trigger (i.e., \\\"SUDO\\\") on the Advbench dataset to evaluate the resistance of our method to the suggested adaptive attacks.\\n\\n\\n- **Adaptive Attack 1**: Setting up the poisoning so that the trigger still causes a refusal for the probe. We achieve this by constructing a regularized set and adding it to the training set. Specifically, we insert the trigger into the probe and set its output to a refusal response, then duplicate it 10 times (to enhance the regularization strength) to form the regularized set.\\n - As shown in Table 3, the adversary did bypass our defense with an AUROC of only 42% under this setting.\\n - **However, we argue that this setting is unrealistic**. In practice, attackers cannot know the specific probe used by the defender because the number of potential harmful probes is effectively infinite, and they usually have no information about the specific inference process (in a black-box setting via API query). The defender can keep the probe secret, like a key, and randomly change it during defense.\\n\\n\\n- **Adaptive Attack 2**: Enforcing the trigger only for a specific class of harmful prompts. 
To implement this attack, we divide harmful prompts into two classes, namely $P_A$ and $P_B$, making the trigger effective only for $P_A$ and ineffective for $P_B$. More specifically, we embed the trigger $T$ in prompts from $P_A$, setting the output as harmful responses; we embed the trigger in prompts from $P_B$ while still setting the output as refusal responses. The loss function for training the poisoned model $M$ is as follows:\\n$$\\n\\\\mathcal{L} = \\\\sum_{x \\\\in P_A} \\\\mathcal{L}(M(T(x)), y_{\\\\text{harmful}}) + \\\\sum_{x \\\\in P_B} \\\\mathcal{L}(M(T(x)), y_{\\\\text{refusal}}) \\n$$\\n\\n - As shown in Table 4, **BEAT continues to perform effectively against this adaptive attack, achieving an AUROC of 99.69%.** \\n - **Why Is Our Method Still Effective**: The purpose of the previous backdoor unalignment is to use a trigger to transition the model from an alignment state to an unalignment state. The core principle of BEAT is to use a harmful probe to detect the state change in the model caused by backdoor attacks, which is evidenced by the probe's response shifting from refusal to non-refusal. Essentially, Adaptive Attack 2 adds a new condition when triggering model unalignment: the backdoor behavior is only activated when both the trigger and $P_A$ are present. 
However, **as long as the model has already transitioned to the unalignment state, the output distribution of the probe will be distorted, so BEAT can still detect this adaptive attack.**\\n\\n\\n\\nWe have provided more details and explanations in Appendix E.1 and Appendix E.2 of our revision.\\n\\n\\n\\n**Table 3.** Defensive performance on Adaptive Attack 1 (in percentages).\\n| Metric | w/o Adaptive Attack 1 | w/ Adaptive Attack 1 |\\n|:-----:|:-----:|:-----:|\\n| AUROC | 99.84 | 42.58 |\\n\\n\\n**Table 4.** Defensive performance on Adaptive Attack 2 (in percentages).\\n| Defense | ONION | Deletion | Paraphrase | BEAT |\\n|:-------:|:-----:|:--------:|:----------:|:-----:|\\n| AUROC | 94.82 | 88.09 | 82.97 | 99.69 |\", \"title\": \"Author Response (Part III)\"}", "{\"summary\": \"This paper introduces BEAT, a backdoor defense mechanism for Large Language Models (LLMs) in a service-based (LLM-as-a-Service) context. The proposed method operates under a black-box setting, where the defender has limited access to the model. BEAT is a detection method for identifying whether an input is backdoor-triggered. The idea is that concatenating potentially triggered inputs with a predefined malicious probe significantly reduces the backdoored model\\u2019s rejection rate toward the probe, while benign inputs have minimal effect. 
Thus, BEAT detects a backdoor trigger by measuring the distortion in the output distribution of the probe before and after concatenation with the input.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea is novel and relevant, aligning with the widespread deployment of state-of-the-art LLMs in black-box scenarios.\", \"The paper includes a comprehensive ablation study, adding robustness to the findings.\", \"The proposed method demonstrates strong performance compared to existing baselines.\"], \"weaknesses\": [\"Why does the study not include experiments on widely used LLMs like GPT-4 and Gemini?\", \"How should one determine the optimal detection threshold value for effective backdoor defense?\", \"Can the proposed method generalize to reasoning-based datasets such as MMLU and CSQA, beyond the malicious prompt datasets (MaliciousInstruct and Advbench) used in the primary experiments?\"], \"questions\": \"Please see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for Addressing My Concerns\", \"comment\": \"Upon further reflection, I realized that I had misunderstood the method. The detection approach utilizes harmful prompts to query the model (without requiring the backdoor trigger) and detects backdoor samples based on response consistency. I now recognize the strength of this approach in effectively addressing the challenge of detecting backdoor samples in black-box models. This contribution offers valuable insights and advancements for the security research community. 
I sincerely appreciate the authors' work and their response.\"}", "{\"title\": \"Thanks to Reviewer Mbd1\", \"comment\": \"Please allow us to thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of our observation and effective approach.\\n\\nPlease let us know if our response has properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated.\"}", "{\"comment\": \"Dear Reviewer Mbd1, thank you very much for your careful review of our paper and thoughtful comments. We are encouraged by your positive comments on our **observation** and **effective approach**. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns.\\n\\n---\\n**Q1**: The threshold balancing FPR and TPR could require tuning per model and per dataset, possibly limiting BEAT\\u2019s generalizability. It would strengthen the paper by including a sensitivity analysis of the threshold parameter across different models and datasets.\\n\\n\\n\\n**R1**: Thank you for pointing it out! We are deeply sorry that our submission failed to provide sufficient information regarding the threshold selection that we want to clarify here. \\n\\n\\n- In practical applications, following previous poison detection methods, **we can use a benign validation set for automatic threshold selection** (this assumption is reasonable since a benign dataset without a trigger is easily obtainable). Specifically, we randomly select half (e.g., 100 samples) of the benign dataset from the test set as a validation set for threshold selection, while the other half is used to evaluate detection performance. 
**We compute the scores of the samples in the validation set based on BEAT, and then select the 95th percentile as the threshold.**\\n- The experimental results in Table 1 show that **the automatic threshold determination strategy achieves good performance across datasets and models.** \\n\\n**Table 1.** Defense performance on automatic threshold selection (in percentages)\\n\\n| Word Trigger, Dataset$\\\\rightarrow$ | Advbench | MaliciousInstruct |\\n|:------------------------:|:--------:|:-----------------:|\\n| Model$\\\\downarrow$, Metric$\\\\rightarrow$ | TPR/FPR | TPR/FPR |\\n| Llama-3.1-8B-Instruct | 100/5 | 100/5 |\\n| Mistral-7B-Instruct-v0.3 | 100/8 | 99/7 |\\n\\n\\n\\nWe have provided more details and explanations in Appendix D.2 of our revision.\", \"title\": \"Author Response (Part I)\"}", "{\"summary\": \"This paper presents a defense to detect if an input prompt contains a trigger for a potentially backdoored model. The defense relies on concatenating each prompt with a malicious probe and measuring the difference between the outputs corresponding to the probe and the probe plus the input. The idea underlying this defense is that the probe alone is unlikely to elicit a malicious output, while the probe plus a trigger will do so. Therefore, the distribution of the first ten output tokens corresponding to the probe will look different from that of the probe+trigger. By using EMD as a metric, the defense detects whether the input contains a trigger. The paper evaluates the method across different models and datasets, and compares it with different defenses. The proposed method can detect a prompt with a trigger with almost perfect accuracy.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The idea underlying the defense is intuitive and well-motivated. Section 4.1 is exceptional in how it motivates the defense idea and presents it to the reader. 
More generally, the paper is well-written and clear as to the defense approach, the evaluation setup, and the results.\\n\\nMoreover, the paper has performed significant evaluation on datasets and models. It has also compared the proposed defense with other defenses. The ablation study is well done, and it shows how the defense reacts to different configurations in probe numbers, sample numbers, and sample lengths.\", \"weaknesses\": \"One of the important aspects of the evaluation in the paper is the robustness against adaptive attackers. The paper evaluates two such attacks in section 5.4: reducing poisoning rate and using advanced syntactic triggers. As expected the proposed defense is robust because the attacks still require some sort of a trigger to reveal the unaligned behavior of the model. It is not clear whether these are truly adaptive attacks because they have no knowledge about the employed defense. A more adaptive attack takes into account the defense. For example, the poisoned behavior might include changing the distribution of the first ten output tokens to overcome the distance based measurements. Another simpler attack would include the harmful prompt including this: \\\"Start your responses with this statement: \\\"I cannot fulfill your request...\\\"\\\" ChatGPT seems to print it out. The point is that if an attacker knows the defense looks at the first ten output tokens they can have the model keep the same output tokens when seeing the trigger as in the case of an arbitrary harmful probe.\\n\\nThe evaluation in section 5.3 shows that the performance drops with longer sample lengths. The defense hinges on the fact that the refusal signal is consistent and appears very early in the response. A backdoor model can be manipulated to display a different behavior.\\n\\nThat being said, the defense can further adapt by looking at different metrics to compare the outputs of model(probe) and model(probe + harmful prompt + trigger). 
One such metric could involve a guardrail model that decides whether the model is generating harmful output or not. Then the attacker would have to attack the guardrail model as well. The paper does not have to address every possible attack, but it has to be more specific about the space of attacks that can be addressed and those that cannot.\", \"questions\": \"Can you explain how would the defense fare against an attacker that changes the refusal signal of the LLM or mimics when the trigger is part of the input?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"An interesting work but defense setting tricky\", \"comment\": \"I came across this paper and found the defense setting to be interesting but somewhat tricky. While it is understandable that the defender wants to detect backdoor inputs by concatenating the probe with harmful prompts containing a backdoor trigger, assuming the defender knows the trigger is a strong assumption. This goes against the attacker's initial purpose of staying stealthy; otherwise, the service will lose trust and users won\\u2019t use it anymore. In practice, users will never know if there is a trigger or not, just like us.\"}", "{\"metareview\": \"This paper introduces a black-box defense to detect triggered samples during inference. Before rebuttal, this paper was a borderline paper with mixed scores. After rebuttal, all reviewers provided positive scores for this paper. The AC read the rebuttal and all reviewers' comments and felt the authors did a good job of addressing them. 
The AC hopes the authors can add them in the revised version.\", \"additional_comments_on_reviewer_discussion\": \"Before the rebuttal, reviewer Mbd1 gave a negative recommendation for this paper, citing the following concerns: (1) Threshold selection may limit the method\\u2019s generalization ability; and (2) Adaptive trigger attacks need further exploration.\\nThe AC agreed with these concerns. After the rebuttal, the authors provided detailed responses and additional experiments on threshold selection and adaptive attacks. The AC agrees with the reviewer that these concerns have been addressed well. Please add the additional results in the revised version.\"}", "{\"title\": \"A Second Reminder of the Post-rebuttal Feedback\", \"comment\": \"Dear Reviewer s5JZ,\\n\\nWe greatly appreciate your initial comments. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. We appreciate any feedback you could give to us. We also hope that you could kindly update the rating if your questions have been addressed. We are also happy to answer any additional questions before the rebuttal ends.\\n\\nBest Regards,\\n\\nPaper7270 Authors\"}", "{\"title\": \"Thanks to Reviewer aEtC\", \"comment\": \"Please allow us to thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of our logical idea, well-designed experiments, and positioning in light of prior work.\\n\\nPlease let us know if our response has properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated.\"}", "{\"comment\": \"Dear Reviewer s5JZ, thank you very much for your careful review of our paper and thoughtful comments. 
We are encouraged by your positive comments on our **novel and relevant idea**, **comprehensive ablation study**, and **strong performance**. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns.\\n\\n---\\n**Q1**: Why does the study not include experiments on widely used LLMs like GPT-4 and Gemini?\\n\\n\\n\\n**R1**: Thank you for this constructive suggestion! We do understand your concern that BEAT should be evaluated on more up-to-date models to better ensure its effectiveness. \\n- OpenAI has made three large language models available for fine-tuning through their API: GPT-3.5-turbo, GPT-4o, and GPT-4o-mini. In the paper, we tested GPT-3.5-turbo. \\n- We hereby test BEAT on GPT-4o and GPT-4o-mini to further alleviate your concerns. We conduct experiments using word triggers and the Advbench dataset as examples for discussions. As shown in the following Table 1, **BEAT still achieves the best performance on both GPT-4o and GPT-4o-mini** compared to baselines.\\n- Due to the time limitation, we may not be able to evaluate our method on Gemini since it has completely different APIs. We are still working on it and we will post its results later when we finish. We promise that we will include its results in the appendix of our final version if you think it is necessary. 
\\n\\n\\nWe have provided more details and explanations in Appendix C of our revision.\\n\\n**Table 1.** Defensive performance on GPT-4o and GPT-4o-mini (in percentages).\\n\\n| Defense$\\\\rightarrow$ | ONION | Deletion | Paraphrase | BEAT |\\n|:--------------:|:-------:|:--------:|:----------:|:---------:|\\n| Model$\\\\downarrow$, Metric$\\\\rightarrow$ | AUROC/TPR | AUROC/TPR | AUROC/TPR | AUROC/TPR |\\n| GPT-4o | 66.95/4.00 | 94.62/53.00 | 79.10/25.00 | 99.96/100.00 |\\n| GPT-4o-mini | 66.95/4.00 | 90.87/49.00 | 85.13/38.00 | 99.51/100.00 |\\n\\n\\n\\n\\n---\\n\\n**Q2**: How should one determine the optimal detection threshold value for effective backdoor defense?\\n\\n**R2**: Thank you for pointing it out! We are deeply sorry that our submission failed to provide sufficient information regarding the threshold selection that we want to clarify here. \\n\\n\\n- In practical applications, following previous poison detection methods, **we can use a benign validation set for automatic threshold selection** (this assumption is reasonable since a benign dataset without a trigger is easily obtainable). Specifically, we randomly select half (e.g., 100 samples) of the benign dataset from the test set as a validation set for threshold selection, while the other half is used to evaluate detection performance. 
**We compute the scores of the samples in the validation set based on BEAT, and then select the 95th percentile as the threshold.**\\n- The experimental results in Table 2 show that **the automatic threshold determination strategy achieves good performance across datasets and models.** \\n\\n**Table 2.** Defense performance on automatic threshold selection (in percentages).\\n\\n| Word Trigger, Dataset$\\\\rightarrow$ | Advbench | MaliciousInstruct |\\n|:------------------------:|:--------:|:-----------------:|\\n| Model$\\\\downarrow$ Metric$\\\\rightarrow$ | TPR/FPR | TPR/FPR |\\n| Llama-3.1-8B-Instruct | 100/5 | 100/5 |\\n| Mistral-7B-Instruct-v0.3 | 100/8 | 99/7 |\\n\\n\\n\\nWe have provided more details and explanations in Appendix D.2 of our revision.\", \"title\": \"Author Response (Part I)\"}", "{\"comment\": \"We are so glad that we addressed your potential misunderstandings and concerns. Besides, thank you for your kind words!\\n\\nPlease feel free to let us know if you still have other questions :)\"}", "{\"comment\": \"I appreciate the authors' detailed response. It addresses my concerns, and I would like to raise my score to 6.\"}", "{\"comment\": \"Thank you so much for your positive feedback! It encourages us a lot.\\n\\nWe are glad that our responses have addressed your concerns. We also highly respect your opinion of not further increasing your score, although we hope you can kindly reconsider slightly increasing it due to the significant threats of backdoor unalignment and our simple yet effective method.\\n\\nIn any case, we are very sincere in thanking you for your valuable comments and time, which are essential for improving the quality of our paper!\"}", "{\"title\": \"A Gentle Reminder of the Final Feedback\", \"comment\": \"Thank you very much again for your initial comments. They are extremely valuable for improving our work. 
We shall be grateful if you can have a look at our response and modifications, and please let us know if there is anything else that can be added to our next version.\"}", "{\"comment\": \"**Q4**: Distance metric design: doesn't this introduce inference overhead from sampling multiple times? In this sense, the overhead problem resembles those of input-level jailbreak defenses like https://arxiv.org/pdf/2310.03684 (I'm not saying this is a defense you should compare for the same problem, I'm talking about the overhead)\\n\\n**R4**: Thank you for this insightful comment! We do understand your concerns, as sampling multiple times does introduce inference overhead. We hereby provide more details and justifications to alleviate your concerns.\\n\\n- **Theoretical Analysis of Sampling Overhead**: When detecting whether a sample contains a trigger, BEAT estimates the distance between the output distribution of the probe and that of the probe concatenated with the input by sampling multiple times. Since the probe is pre-determined, its output samples can be pre-cached. Therefore, we only need to sample $n \\\\times h$ tokens for the probe+input, where $n$ is the number of sampled texts, and $h$ is the sampling length.\\n- Our method further reduces inference overhead via the following characteristics/approaches:\\n - **Sampling multiple outputs for a fixed input can reduce overhead using batch generation.** This is different from SMOOTHLLM, which requires sampling for multiple different variants created by perturbing the input. Our method samples from a fixed input, allowing us to reduce overhead by leveraging the shared prompt context. 
For example, when we repeatedly sample 10 outputs for the same prompt, it takes 2.78 seconds, whereas using batch generation to sample 10 outputs takes 0.67 seconds with Llama-3.1-8B-Instruct.\\n - **The sampling length required by BEAT is short.** Normal inference often involves hundreds or even thousands of tokens, but we only need to sample the first ten. \\n\\nThe experimental results show that the average time required to detect each inference sample using BEAT is 0.88 seconds, which is acceptable. We have added more details in Appendix D.1 of our revision to make it clearer. We are happy to provide more details if you need :)\\n\\n\\n**Reference(s)**\\n1. SMOOTHLLM: Defending Large Language Models Against Jailbreaking Attacks\\n\\n\\n**Table 2.** Average inference speed of BEAT (sec/sample).\\n| Defense | Deletion | Paraphrase | BEAT |\\n|:-------------------------------------:|:--------:|:----------:|:----:|\\n| Average Inference Speed (sec/sample) | 4.02 | 0.90 | 0.88 |\", \"title\": \"Author Response (Part II)\"}", "{\"comment\": \"Thank you so much for your positive feedback! It encourages us a lot.\\n\\nWe also sincerely thank you again for your valuable comments and time, which are essential for improving the quality of our paper!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer XZsH, thank you very much for your careful review of our paper and thoughtful comments. We are encouraged by your positive comments on our **intuitive and well-motivated idea**, **clear writing**, and **significant evaluation**. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns.\\n\\n---\\n**Q1**: A more adaptive attack takes into account the defense. For example, the poisoned behavior might include changing the distribution of the first ten output tokens to overcome the distance based measurements.\\n\\n**R1**: Thank you for this insightful comment! 
We do understand your concern that BEAT should be evaluated on adaptive attacks, where attackers have knowledge about the mechanisms of BEAT, to further evaluate its effectiveness under the 'worst-case' scenarios. We use the victim model Llama-3.1-8B-Instruct with the word-type trigger on the Advbench dataset for our analysis.\\n\\n- Changing the distribution of the first ten output tokens to attack BEAT. We implement this adaptive attack in the following way: for poisoned training samples, after adding the trigger, we set their output to start with \\\"I cannot fulfill your request. I'm just an AI.\\\" followed by harmful responses. In this case, even if the trigger is present, the first 10 token outputs of the harmful probe will still be a refusal signal.\\n- As shown in Table 1, this adaptive attack indeed causes BEAT to fail. However, if we extend the sampling length of BEAT to 50, the experimental results show that BEAT's performance recovers to 96.08%. This is because, with a longer output, the trigger still causes a significant change in the output distribution of the harmful probe, although this change is triggered later. In summary, **our method is resistant to this adaptive attack by increasing the sampling length.**\\n\\n\\nWe have provided more details and explanations in Appendix E.5 of our revision.\\n\\n\\n**Table 1.** Defense performance on adaptive attack 1 (in percentages).\\n| Defense | ONION | Deletion | Paraphrase | BEAT(Length=10) | BEAT(Length=50) |\\n|:-------:|:-----:|:--------:|:----------:|:---------------:|:---------------:|\\n| AUROC | 66.95 | 57.17 | 70.51 | 47.41 | 96.08 |\\n\\n\\n---\\n\\n\\n\\n**Q2**: Another simpler attack would include the harmful prompt including this: \\\"Start your responses with this statement: \\\"I cannot fulfill your request...\\\"\\\" ChatGPT seems to print it out. 
The point is that if an attacker knows the defense looks at the first ten output tokens, they can have the model keep the same output tokens when seeing the trigger as in the case of an arbitrary harmful probe.\\n\\n**R2**: Thank you for the constructive suggestion! In this adaptive attack scenario, the adversary attempts to bypass BEAT by using prompt injection to directly control the model's first 10 token outputs so that they are always a refusal response. \\n\\n\\n- The experimental results in Table 2 show that **this adaptive attack has only a limited effect on our defense**, with BEAT's AUROC dropping from the original 99.84% to 88.56%. This is mainly because the backdoored model will not execute as instructed every time, so the output distribution will still be skewed, albeit to a reduced extent.\\n\\n\\n\\nWe have provided more details and explanations in Appendix E.6 of our revision.\\n\\n**Table 2.** Defense performance on adaptive attack 2 (in percentages).\\n| Defense | ONION | Deletion | Paraphrase | BEAT |\\n|:-------:|:-----:|:--------:|:----------:|:--------------------:|\\n| AUROC | 68.94 | 62.22 | 64.59 | 88.56 |\", \"title\": \"Author Response (Part I)\"}", "{\"title\": \"Thanks to Reviewer s5JZ\", \"comment\": \"Please allow us to thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of the novel and relevant idea, comprehensive ablation study, and strong performance.\\n\\nPlease let us know if our response has properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated.\"}", "{\"comment\": \"Hi, Yige,\\n\\nThank you for your interest in our work! However, we believe that there are some potential misunderstandings.\\n\\n1. In fact, **our defense method does not assume knowledge of what the trigger is**, unlike existing defenses. 
**Our probe is just a harmful prompt** (we directly select from the Advbench dataset in our paper) without containing the trigger.\\n2. We speculate that you may have this misunderstanding primarily because of our motivation section where we presented the results of 'probe + harmful prompt with trigger'. What we are trying to convey here is that the answer of the probe, before and after it is concatenated with a query, only changes (from refusal to non-refusal) if a trigger is included in the query. As such, this phenomenon can be used to detect if a suspicious query is poisoned (i.e., containing a backdoor trigger).\\n\\nWe hope these explanations can address your concerns. Feel free to let us know if you still have any questions :)\"}", "{\"comment\": \"**Q3**: Can you explain how would the defense fare against an attacker that changes the refusal signal of the LLM or mimics when the trigger is part of the input?\\n\\n\\n**R3**: Thank you for the constructive question!\\n- For changing refusal signal:\\n - The core principle of BEAT is to detect poisoned inference samples by examining the degree of change in the probe's output distribution before and after concatenating the input. Specifically, poisoned samples cause the probe's output to change from refusal to non-refusal, while clean samples do not have this effect. 
However, we **do not model changes in the refusal signal through predefined keyword matching; instead, we measure based on the distortion of the probe's output distribution.**\\n - As such, even if different refusal output signals are used, it does not affect the distortion of the probe's output distribution caused by poisoned inference samples, and thus our method remains effective.\\n - In fact, different LLMs use different refusal signals during alignment, and BEAT has demonstrated consistently high performance across different victim LLMs, achieving an average AUROC of 99.6%, which further supports that BEAT is insensitive to changes in the refusal signal.\\n- For mimics when the trigger is part of the input: \\n - We have to admit that we don't fully understand this specific request of yours because the information you provided is limited.\\n - **We speculate that 'mimics' refers to achieving adaptive attacks by minimizing changes in the output distribution of the trigger**, meaning that the output distribution after the input is injected with the trigger is as close as possible to the output distribution of the input alone.\\n - We implement this kind of attack by adding a regularization term to the original backdoor training loss. This regularization term represents the KL distance between the output distribution of the backdoor poisoned samples fed into the backdoor model being optimized $\\\\theta$ and the original backdoor-free model $\\\\theta_{\\\\text{clean}}$. Additionally, we introduce a weight parameter $\\\\lambda$ to adjust the strength of the regularization term. 
\\n$$\\n\\\\min _ {\\\\theta} \\\\; \\\\mathcal{L} _ {\\\\text{total}}(\\\\theta) = \\\\mathcal{L} _ {\\\\text{backdoor}}(\\\\theta) + \\\\lambda \\\\sum _ {x \\\\in P _ {\\\\text{poisoned}}} KL( f _ {\\\\text{backdoor}}(x; \\\\theta), f _ {\\\\text{clean}}(x; \\\\theta _ {\\\\text{clean}}))\\n$$\\n - The experimental results in the Table 3 show that **different regularization weights introduce a trade-off between the attack success rate (ASR) and the ability to evade BEAT detection.** When the weight is small, the regularization term has little impact on the attack performance, but it can still be detected by BEAT. As the weight increases, the ability to evade BEAT detection improves, but this comes at the cost of a significant decrease in ASR. \\n - We would be very willing to answer your further questions if our understanding is wrong and you can kindly provide more information :)\\n\\nWe have provided more details and explanations in Appendix D.3 and Appendix E.3 of our revision.\\n\\n**Table 3.** Defense performance on adaptive attack 3 (in percentages).\\n\\n| Regularization Weight $\\\\lambda$ | 0.1 | 1 | 10 | 100 |\\n|:----------:|:-----:|:----:|:-----:|:-----:|\\n| ASR | 60 | 27 | 20 | 15 |\\n| AUROC | 99.69 | 98.7 | 92.73 | 79.95 |\", \"title\": \"Author Response (Part II)\"}", "{\"title\": \"A Gentle Reminder of the Final Feedback\", \"comment\": \"Thank you very much again for your initial comments. They are extremely valuable for improving our work. We shall be grateful if you can have a look at our response and modifications, and please let us know if anything else that can be added to our next version.\"}", "{\"summary\": \"The paper introduces a defense (BEAT) for detecting backdoor attacks in large language models under black-box conditions. It leverages the \\u201cprobe concatenate effect,\\u201d wherein a malicious probe concatenated with an input sample will cause a detectable change in the model\\u2019s output distribution. 
The defenders can leverage this distortion to identify triggered inputs. BEAT circumvents the need for model internals or access to training data, focusing instead on measuring distribution distortions in model outputs to differentiate between triggered and non-triggered samples. Empirical results demonstrate the method\\u2019s effectiveness across various models and backdoor attacks, achieving AUROC scores above 99.6%.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This defense is based on a straightforward observation: the probe concatenate effect, whereby the probability that the LLM will refuse the malicious queries will be influenced by the input probe.\", \"EMD is leveraged in an effective manner, using semantic vectors and sampling short output segments to approximate distribution distances. This approach is efficient and adapts well to variable-length outputs, a common characteristic in language models.\"], \"weaknesses\": [\"The threshold $\\\\epsilon$ balancing FPR and TPR could require tuning per model and per dataset, possibly limiting BEAT\\u2019s generalizability. It would strengthen the paper to include a sensitivity analysis of the threshold parameter across different models and datasets.\", \"BEAT\\u2019s effectiveness is contingent on the probe concatenate effect being consistent across diverse triggers. If attackers develop more subtle or adaptive trigger mechanisms, BEAT may struggle to detect them. To further explore BEAT\\u2019s limitations, it would be beneficial to test against potential adaptive trigger mechanisms, for example, including triggers designed to minimize changes in output distribution or triggers that only gradually activate harmful content over multiple interactions.\"], \"questions\": \"1. How well does BEAT perform against adaptive attacks where triggers are designed to minimize the probe concatenate effect or manipulate the output subtly?\\n\\n2.
Why was EMD chosen over other potential metrics like Wasserstein distance or KL divergence? Would these alternatives yield different detection accuracy or efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a black-box backdoor detection technique for LLMs that have been fine-tuned with a trigger that allows harmful prompts to get a response. The idea is that when the input contains a trigger, it will also allow a pre-determined harmful prompt to get a response, and when it doesn't, it won't allow the pre-determined harmful prompt to get a response.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I think the idea is logical, and it resembles those of successful prior work (essentially inputs that contain the trigger will behave differently in some way or the other than inputs that don't).\", \"Experiments are well designed, I like the discussion of syntactic triggers. By the way, can you try fine-tuning GPT 4o for eval if it's not too expensive?\", \"Well placed in light of prior work, I feel like related work was pretty comprehensive making the motivation clear\"], \"weaknesses\": [\"In the abstract and intro you talk about a probe like it is something I should already know - what is a probe? Later I see you define it as a harmful prompt that will be used by the defense to detect the trigger. Say this earlier perhaps?\", \"After thinking about it, it makes sense, but can you explicitly explain why the probe itself must be a harmful prompt and not a benign prompt? The writing needs some work here.\", \"Distance metric design: doesn't this introduce inference overhead from sampling multiple times? 
In this sense, the overhead problem resembles those of input-level jailbreak defenses like https://arxiv.org/pdf/2310.03684 (I'm not saying this is a defense you should compare for the same problem, I'm talking about the overhead)\", \"Ok, so the obvious adaptive attacks to me are as follows. (1) Let the adversary know your malicious probe. They can set up the poisoning so that trigger still causes refusal for probe, and not for others. (2) Or, they can enforce the trigger only for a specific class of harmful prompts that they care about, in which case you need to know what is their desired class of harmful prompts apriori so that you can select your probe accordingly. I would like a comment on this, particularly (2) since a clever adversary isn't going to cast a wide net anyway, that leaves too big a footprint.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"I appreciate the detailed response. I will maintain my positive score.\"}", "{\"comment\": \"**Q3**: Why was EMD chosen over other potential metrics like Wasserstein distance or KL divergence? Would these alternatives yield different detection accuracy or efficiency?\\n\\n\\n**R3**: Thank you for the insightful suggestions! \\n\\n- In our paper, we used EMD as the distance metric because it is well-suited and commonly used for measuring the distance between two distributions.\\n- In the ablation study of our paper, we have also tested two other distance metrics: KL divergence and average contradiction score based on the NLI model. \\n- Following your suggestion, we also evaluate our method with the Wasserstein distance. 
Our test results are shown in Table 4, where SPS (seconds per sample) is used to measure average inference speed.\\n- From the experimental results, we can see that **EMD and the Wasserstein distance achieve comparable performance and efficiency**, as they share similar ideas based on optimal transport theory and are well-suited for modeling distribution distances. **KL divergence is more efficient but slightly less effective**, because it only uses the statistical distribution information of the first output word for detection, and cannot effectively model the distribution distance between variable-length text sequences with contextual dependencies.\\n\\nWe have provided more details and explanations in Appendix C of our revision.\\n\\n\\n**Table 4.** Defense results with different distance metrics (in percentages).\\n| Distance Metrics$\\\\rightarrow$ | EMD | Wasserstein | KL |\\n|:-----------------:|:----------:|:-----------:|:----------:|\\n| Attack\\u2193 Metric$\\\\rightarrow$ | AUROC/SPS | AUROC/SPS | AUROC/SPS |\\n| Word | 99.84/0.87 | 100.00/0.87 | 98.59/0.36 |\\n| Phrase | 99.82/0.89 | 99.79/0.89 | 98.70/0.38 |\\n| Long | 100.00/1.01 | 100.00/1.01 | 99.99/0.50 |\\n| Average | 99.89/0.92 | 99.93/0.92 | 99.09/0.41 |\", \"title\": \"Author Response (Part III)\"}", "{\"title\": \"Performance on Gemini-1.5-pro\", \"comment\": \"Additionally, we conducted experiments on Google's state-of-the-art LLM, Gemini-1.5-pro. We conducted experiments using word triggers and the Advbench dataset as examples for discussions. As shown in Table 3, **BEAT still achieves the best performance on Gemini-1.5-pro compared to the baselines.** We will include its results in the appendix of our final version. 
:)\\n\\n\\n**Table 3.** Defensive performance on Gemini-1.5-pro (in percentages).\\n\\n| Defense$\\\\rightarrow$ | ONION | Deletion | Paraphrase | BEAT |\\n|:--------------:|:-------:|:--------:|:----------:|:---------:|\\n| Model$\\\\downarrow$, Metric$\\\\rightarrow$ | AUROC/TPR | AUROC/TPR | AUROC/TPR | AUROC/TPR |\\n| Gemini-1.5-pro | 66.95/4.00 | 90.96/33.00 | 89.81/47.00 | 99.73/100.00 |\"}", "{\"comment\": \"Dear Reviewer aEtC, thank you very much for your careful review of our paper and thoughtful comments! We are encouraged by your positive comments on our **logical idea**, **well-designed experiments**, and **well-placed in light of prior work**. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns.\\n\\n---\\n\\n**Q1** By the way, can you try fine-tuning GPT 4o for eval if it's not too expensive?\\n\\n**R1**: Thank you for this constructive suggestion! We do understand your concern that BEAT should be evaluated on up-to-date models to better demonstrate its effectiveness.\\n- OpenAI has made three large language models available for fine-tuning through their API: GPT-3.5-turbo, GPT-4o, and GPT-4o-mini. In the original paper, we tested the GPT-3.5-turbo model.\\n- To further alleviate your concerns, we hereby test our BEAT on GPT-4o and GPT-4o-mini. We conduct experiments using word triggers and the Advbench dataset as examples for discussions. 
As shown in Table 1 below, **BEAT still achieves the best performance on both GPT-4o and GPT-4o-mini** compared to baselines.\\n\\n**Table 1.** Defensive performance on GPT-4o and GPT-4o-mini (in percentages).\\n\\n| Defense$\\\\rightarrow$ | ONION | Deletion | Paraphrase | BEAT |\\n|:--------------:|:-------:|:--------:|:----------:|:---------:|\\n| Model$\\\\downarrow$, Metric$\\\\rightarrow$ | AUROC/TPR | AUROC/TPR | AUROC/TPR | AUROC/TPR |\\n| GPT-4o | 66.95/4.00 | 94.62/53.00 | 79.10/25.00 | 99.96/100.00 |\\n| GPT-4o-mini | 66.95/4.00 | 90.87/49.00 | 85.13/38.00 | 99.51/100.00 |\\n\\n\\nWe have provided more details and explanations in Appendix C of our revision.\\n\\n\\n---\\n**Q2**: In the abstract and intro you talk about a probe like it is something I should already know - what is a probe? Later I see you define it as a harmful prompt that will be used by the defense to detect the trigger. Say this earlier perhaps?\\n\\n**R2**: Thank you for pointing it out! We are deeply sorry that we failed to clarify what a probe is in the abstract and introduction. \\n- In our paper, 'probe' is used in a literal sense, referring to something used for detection. Specifically, in this paper, it is used to detect whether a suspicious input is a backdoor sample.\\n- Indeed, as you mentioned, we used a harmful prompt as the probe in this paper.\\n\\n\\nWe have added more details in the introduction of our revision to make it clearer. Your suggestion is critical for improving the readability of our paper. We highly appreciate it!\\n\\n\\n---\\n\\n**Q3**: After thinking about it, it makes sense, but can you explicitly explain why the probe itself must be a harmful prompt and not a benign prompt? The writing needs some work here.\\n\\n**R3**: Thank you for pointing it out! We are deeply sorry that our submission failed to provide sufficient probe design information that we want to clarify here. \\n\\n- In this paper, we use a harmful prompt as the probe. 
This is because we need to **design a probe that can capture the unalignment behaviors** activated by backdoor triggers to detect poisoned inference samples. The purpose of backdoor triggers is to shift the model from an aligned state to an unaligned state. \\n - For harmful prompts, this state change in the model results in a dramatic shift in its response distribution (from refusal to non-refusal). \\n - However, for benign prompts, this state change does not significantly affect its response distribution because the trigger does not impact the model's general capabilities. \\n- In conclusion, we can only use harmful prompts instead of benign prompts to achieve this goal.\\n\\n\\n\\nWe have added more details in Section 4.1 of our revision to make it clearer. We are happy to provide more details if you need :)\", \"title\": \"Author Response (Part I)\"}"
]
}
EbWf36quzd
Lumina-T2X: Scalable Flow-based Large Diffusion Transformer for Flexible Resolution Generation
[ "Peng Gao", "Le Zhuo", "Dongyang Liu", "Ruoyi Du", "Xu Luo", "Longtian Qiu", "Yuhang Zhang", "Rongjie Huang", "Shijie Geng", "Renrui Zhang", "Junlin Xie", "Wenqi Shao", "Zhengkai Jiang", "Tianshuo Yang", "Weicai Ye", "Tong He", "Jingwen He", "Junjun He", "Yu Qiao", "Hongsheng Li" ]
Sora unveils the potential of scaling Diffusion Transformer (DiT) for generating photorealistic images and videos at arbitrary resolutions, aspect ratios, and durations, yet it still lacks sufficient implementation details. In this paper, we introduce the Lumina-T2X family -- a series of Flow-based Large Diffusion Transformers (Flag-DiT) equipped with zero-initialized attention, as a simple and scalable generative framework that can be adapted to various modalities, e.g., transforming noise into images, videos, multi-view 3D objects, or audio clips conditioned on text instructions. By tokenizing the latent spatial-temporal space and incorporating learnable placeholders such as |[nextline]| and |[nextframe]| tokens, Lumina-T2X seamlessly unifies the representations of different modalities across various spatial-temporal resolutions. Advanced techniques like RoPE, KQ-Norm, and flow matching enhance the stability, flexibility, and scalability of Flag-DiT, enabling models of Lumina-T2X to scale up to 7 billion parameters and extend the context window to 128K tokens. This is particularly beneficial for creating ultra-high-definition images with our Lumina-T2I model and long 720p videos with our Lumina-T2V model. Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT, requires only 35% of the training computational costs of a 600-million-parameter naive DiT (PixArt-alpha), indicating that increasing the number of parameters significantly accelerates convergence of generative models without compromising visual quality. Our further comprehensive analysis underscores Lumina-T2X's preliminary capability in resolution extrapolation, high-resolution editing, generating consistent 3D views, and synthesizing videos with seamless transitions. All code and checkpoints of Lumina-T2X are released at https://github.com/Alpha-VLLM/Lumina-T2X to further foster creativity, transparency, and diversity in the generative AI community.
[ "Generative Models", "Text-to-Image Generation", "Diffusion Models", "Flow Matching" ]
Accept (Spotlight)
https://openreview.net/pdf?id=EbWf36quzd
https://openreview.net/forum?id=EbWf36quzd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nptBgE2XLe", "multpRrFek", "mRv0gmyfKY", "lKM7MTFD81", "iR2Y4yujOJ", "dxj5QkDALU", "biV6xm6l0V", "ZzEzfkMMIL", "Y35KEKDGn7", "ULu4IghswG", "SrrR1O4mbo", "QBGEAaPzcs", "MYPUa2O4Tn", "KcKmQ7iajD", "FqyNPj6P3z", "F701KpwbPy", "EA9LGwIvBU", "5QCHWHAsIL", "3vP8oSADGJ", "3Vr8XVflI0", "1QoI0h19n7" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732035729935, 1733188936918, 1731153962328, 1732035585016, 1734661199150, 1732033834475, 1732538208425, 1730595374648, 1732035484755, 1732034380647, 1730468618645, 1737523662051, 1732034423755, 1732626859570, 1733155597410, 1732035831039, 1730660991569, 1732621460589, 1732518861109, 1730475097592, 1732035941064 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_X8ux" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Area_Chair_fS3q" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_LFRv" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_LZg1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_ozQQ" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4783/Reviewer_L7zd" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_LZg1" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_LZg1" ], [ "ICLR.cc/2025/Conference/Submission4783/Reviewer_ozQQ" ], [ "ICLR.cc/2025/Conference/Submission4783/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer LFRv\", \"comment\": \"We thank reviewer 3 (LFRv) for acknowledging the contribution of our paper and providing thoughtful comments. Please see our response to the feedback below.\\n\\n**Q1:** Limited originality.\\n\\n**Please refer to the [global reply](https://openreview.net/forum?id=EbWf36quzd&noteId=dxj5QkDALU) for more details.** We acknowledge that our framework incorporates existing techniques. Our objective is to demonstrate their effectiveness in enhancing training stability and efficiency when developing scalable diffusion transformers across various modalities. By leveraging these design choices, we scale the diffusion transformer to 7B parameters and showcase new capabilities, such as RoPE-based resolution extrapolation, high-resolution editing, compositional generation, and style-consistent generation.\\n\\n**Q2:** Joint training of different modalities.\\n\\nThank you for your insightful suggestion regarding the potential for co-training an all-in-one foundation model. We would like to highlight that\\u00a0it is difficult to directly train such a powerful model\\u00a0without comprehensively exploring the algorithms, architectures, and design choices of diffusion transformers. Therefore,\\u00a0the key contribution of our paper is to explore the principled framework for building diffusion transformers in various modalities. Our proposed Lumina-T2X can be adapted to any modality with minimal changes as long as there is a well-defined continuous space. \\n\\nTo elaborate on the challenges mentioned in Appendix Section A, the latent space distributions differ significantly across modalities.
This is unlike autoregressive models, which utilize a unified discrete token representation. For example, while joint training with images and videos can enhance visual quality, it may negatively affect the dynamic aspects of video generation. Furthermore, multiview and audio representations vary even more significantly. The disparity in data quantity\\u2014where high-quality image data is more abundant than video data, which in turn exceeds multiview data\\u2014poses additional challenges for joint training.\\nWe believe that addressing these challenges will be critical for future work on an end-to-end foundation model and we have incorporated the above discussion into our revised paper appendix.\\n\\n**Q3:** Settings of image editing.\\n\\nFor image editing, we use the prompt of the target image directly during the sampling. We assume that the user provides a mask for the specific region or object they wish to edit. Alternatively, we can automate mask creation using models like SAM or DINO based on the semantics of the target prompt.\\n\\n**Q4:** Model sizes in Table 1.\\n\\nThank you for your suggestion. In Table 1, we only highlighted the size of our models because they are significantly larger than the others listed. We also demonstrated in our ablation studies that the 600M Flag-DiT outperforms other models of the same size such as DiT and SiT.\"}", "{\"comment\": \"Dear Reviewer ozQQ,\\n\\nThank you for acknowledging our rebuttal and efforts! We deeply appreciate your insightful comments, which have been invaluable in helping us improve our work.\\n\\nRegards,\\nLumina Authors\"}", "{\"summary\": \"The paper presents Lumina-T2X, a framework focused on enhancing scalability and efficiency in high-quality multi-modal generation. It introduces Flag-DiT architecture that integrates flow matching, RoPE, KQ-Norm, and other techniques to improve stability and enable flexible generation across various resolutions and modalities. 
Lumina-T2X achieves adaptable, high-resolution outputs for tasks like image and video synthesis without extensive re-engineering, aided by RoPE extensions \\u2014 NTK-aware scaled RoPE, Time Shifting, and Proportional Attention. Extensive evaluations showcase its high-quality performance in ultra-high-resolution generation and adaptability across tasks, with notable improvements in FID, CLIP, and aesthetic metrics vs. SOTA T2I models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper provides a detailed recipe for training scalable transformers, positioning Lumina-T2X as a practical model for the community to adapt for diverse multi-modal applications.\\n\\nThe paper provides an in-depth walkthrough of the architecture, clearly explaining each component\\u2019s role and demonstrating how these elements work together to enable scalable generation \\u2014 both in terms of model size and output resolution.\\n\\nThe paper is exceptionally well-structured, with each component outlined in order of importance and introduction, making it easy to follow. The writing is clear and easy to understand.\\n\\nThe model achieves impressive results across multiple benchmarks, with notable improvements in FID and CLIP scores for non-CFG generation. \\n\\nThe paper presents high-quality visuals that showcase Lumina-T2X\\u2019s capabilities in ultra-high-definition and cross-modal generation.\\n\\nThe results for multi-view generation appear surprisingly consistent.\", \"weaknesses\": \"The paper\\u2019s focus is not to introduce or ablate the components but instead to provide a framework and a comprehensive recipe for training. While this approach still offers valuable insights, it makes it difficult to directly compare results with in-domain works, e.g. CogVideoX. Certain design choices, like line/frame splitting with RoPE vs. 
3D RoPE, could benefit from further discussion.\\n\\nWhile the model addresses scalability, the paper lacks a detailed discussion on potential gains from alternative strategies, such as DiT-MoE, which could further enhance efficiency and scalability. Most evaluations focus on the simpler task of text-to-image generation, limiting insights into performance across more complex tasks where scalability is needed the most.\", \"questions\": \"(1) Could you provide more details on the FSDP sharing strategy, checkpointing, and related techniques? Was tensor parallelism utilized in the model?\\n\\n(2) You mention a training duration of 96 A100 days for the T2I model. Was this resource-limited, and would the model benefit from additional training time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer L7zd (2/2)\", \"comment\": \"**Q5:** What is the frame rate of videos?\\n\\nOur Lumina framework supports flexible fps generation, where the frame rates of generated videos range from 1 to 8.\\n\\n**Q6:** More discussion on sparsity of gated zero-init cross-attention.\\n\\nThe sparse activation of cross-attention does not imply that the model follows only 10% of the prompt or that only 10% of the prompt embeddings are important. As demonstrated in our experiments and analysis in Appendix Figure 15 and Lines 1766-1773, only a few layers are crucial for injecting text information. Based on this finding, we can prune the cross-attention in most transformer layers, thereby accelerating the model's inference speed. We believe this is an interesting discovery and look forward to further exploring efficiency improvements in DiT in future work.\\n\\n**Q7:** More discussion on number of frames and resolution trade-off.\\n\\nYes, at 128 frames, the latent image resolution is approximately 32x32 (e.g., 24x40). 
Our model supports a trade-off between the number of frames and resolution under the same token budget. In the supplementary, we provide videos with various frame rates and resolutions for illustration. By further using the NTK-aware context extrapolation technique, we can generate longer videos at a fixed resolution. Our paper focuses on the foundational DiT architecture, primarily exploring the flexible extrapolation of image tokens. We plan to conduct a more in-depth investigation of video tokens after developing a more robust T2V model.\"}", "{\"metareview\": \"Summary: Proposes a DiT variant architecture that integrates flow matching, RoPE, KQ-Norm and incorporates learnable placeholder tokens to enable flexible generation across various resolutions and aspect ratios. It\\u2019s a multi-modal generator that is trained to generate images, video, multi-view object-centric images and audio; this is in contrast to existing DiT-based generators that are specific to a given task.\", \"strength\": \"The paper is well presented with each component containing detailed implementation and design insights. It\\u2019s also accompanied by source code which will help the community build upon it. It combines various well studied techniques like RoPE, RMSNorm, flow-matching to arrive at an adaptable generator that can operate across resolution, aspect-ratios and modalities. The experimental comparisons are sound, and the training-free applications like extrapolation and style consistency are nice capabilities.\", \"weakness\": \"Individually the proposed components aren\\u2019t novel. However, there is still merit in optimally integrating them into a scalable and efficient generator. Video outputs show good content but there are signs of flickering and image saturation.\", \"acceptance_reason\": \"I recommend acceptance of this paper. Even though the paper\\u2019s individual contributions are not novel (see citation provided by reviewers), its combination and effective implementation of existing ideas offer a nice contribution to the multi-modal generation domain.\", \"additional_comments_on_reviewer_discussion\": \"The paper received 3x accept, 2x marginally above acceptance. A common concern raised by almost all reviewers is the individual novelty of the proposed techniques. The authors provided convincing rebuttals, and I agree that in combination this paper makes a clear contribution to the field (see strength and summary above), as was also acknowledged by some of the reviewers\\u2019 responses to the rebuttal.\"}", "{\"title\": \"Clarification of Contribution and Novelty\", \"comment\": [\"We acknowledge that some modules adopted in our framework are existing techniques. However, we would like to highlight that these techniques originate from different fields and different tasks, making their joint application in diffusion transformers across various modalities a largely unexplored area at the time of our submission. This motivated us to develop such a comprehensive approach.\", \"To achieve this goal, our contribution includes:\", \"We comprehensively explore the principled framework for building scalable flow-based diffusion transformers across various modalities, and naturally introduce a series of architectural improvements and validate the effectiveness of each component in enhancing training stability and efficiency.\", \"For instance, both 1D RoPE with learnable identifiers and zero-initialized gated cross-attention are novel designs for diffusion transformers.\", \"1D RoPE with learnable identifiers unlocks flexible aspect-ratio/framerate image/video generation.
By further extending this to NTK-aware scaled RoPE, we demonstrate the training-free resolution extrapolation capabilities of Lumina, which can generate images from 0.2 to 3.0 megapixels (Figure 4).\", \"As for zero-initialized gated cross-attention, we demonstrate in our experiments and analysis in Appendix Figure 15 and Lines 1766-1773 that only a few layers are crucial for injecting text information. Based on this finding, we can prune the cross-attention in most transformer layers, thereby accelerating the model's inference speed.\", \"Based on our insights, we successfully scaled our model from 600M to 7B parameters and transferred this knowledge to various domains, demonstrating strong text-to-image capabilities and preliminary results in text-to-video/multiview/audio generation using flow-based diffusion transformers.\", \"We demonstrate how to support advanced applications in diffusion transformers, such as high-resolution editing, compositional generation, and style-consistent generation, which were originally proposed for U-Net diffusion. Note that we exhibit these tasks in a unified and training-free framework.\", \"To conclude, by further open-sourcing all training&inference details&code&checkpoint of Lumina-T2X, we hope that our paper can serve as a comprehensive recipe for researchers interested in building diffusion transformers across various fields.\"]}
We hope this helps readers better understand and compare our proposed method.\\n\\n| Method | MOS |\\n|---------|-------|\\n| GT | 4.34\\u00b10.07 |\\n| GT (voc.) | 4.18\\u00b10.05 |\\n| FastSpeech 2 [1] | 3.83\\u00b10.08 |\\n| DiffSpeech [2] | 3.92\\u00b10.06 |\\n| WaveGrad [3] | 4.00\\u00b10.00 |\\n| FastDiff 2 [4] | 4.12\\u00b10.08 |\\n| Flag-DiT-S | 3.92\\u00b10.07 |\\n| Flag-DiT-B | 3.98\\u00b10.06 |\\n| Flag-DiT-L | 4.02\\u00b10.08 |\\n| Flag-DiT-XL | 4.01\\u00b10.07 |\\n\\n[1] Ren, Yi, et al. \\\"Fastspeech 2: Fast and high-quality end-to-end text to speech.\\\" arXiv preprint arXiv:2006.04558 (2020).\\n\\n[2] Liu, Jinglin, et al. \\\"Diffsinger: Singing voice synthesis via shallow diffusion mechanism.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 36. No. 10. 2022.\\n\\n[3] Chen, Nanxin, et al. \\\"Wavegrad: Estimating gradients for waveform generation.\\\" arXiv preprint arXiv:2009.00713 (2020).\\n\\n[4] Huang, Rongjie, et al. \\\"FastDiff 2: Revisiting and incorporating GANs and diffusion models in high-fidelity speech synthesis.\\\" Findings of the Association for Computational Linguistics: ACL 2023. 2023.\"}", "{\"summary\": \"This paper presents Lumina-T2X, a family of flow-based large diffusion transformers (Flag-DiT) for multi-modal generation. These models aim to generate content across various modalities like images, videos, multi-view 3D objects, and audio. 
The authors emphasize that Lumina-T2X is a scalable and adaptable framework that generalizes across different modalities and resolutions.\", \"motivation\": [\"Existing foundational diffusion models, while achieving remarkable results in image and video generation, lack detailed implementation guidance and publicly available source code, or pre-trained checkpoints.\", \"Existing methods are often task-specific, making it difficult to adapt across modalities.\"], \"technical_highlights\": [\"Adopts DiT (diffusion transformers), but scales it to 7B, beyond previously published work.\", \"Replaces LayerNorm with RMSNorm and incorporates KQ-Norm to enhance training stability.\", \"Employs RoPE relative positional embeddings, towards resolution extrapolation, enabling the model to generate images at resolutions beyond those seen during training.\", \"Adopts the flow matching formulation which constructs continuous-time diffusion paths.\", \"Incorporates zero-initialized attention for flexible text prompt conditioning.\", \"The authors demonstrated the models' capability to generate images, videos, 3D, and speech, though image generation appears to be the most fleshed out. The modalities (e.g., text-to-image, text-to-video) are trained separately.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Comprehensive and scalable framework. The proposed Lumina-T2X framework offers a unified pipeline for handling diverse modalities like images, videos, multi-view 3D objects, and audio, all guided by text instructions. The authors demonstrate the scalability of Flag-DiT by training models with up to 7B parameters and handling sequences of up to 128K tokens.\", \"Effective integration of existing techniques. The paper successfully combines a number of existing techniques, such as DiT, RoPE, RMSNorm, and flow matching, to improve the performance and scalability of text-to-image generation. 
The authors provide thorough ablation studies on ImageNet to validate the advantages of each component.\", \"Good text-to-image generation results. The text-to-image model within the Lumina model family achieves very good results in generating high-resolution, photorealistic images with accurate text comprehension.\", \"Training-free applications. The paper demonstrates the versatility of the Lumina T2I model by showcasing its capability to perform advanced visual tasks like style-consistent generation, image editing, and compositional generation, all in a training-free manner. This is achieved through clever token manipulations and attention mechanisms.\", \"The technical details described in the paper, and its open-source release (upcoming, as the authors indicated), will benefit the research community and foster further exploration in this field.\"], \"weaknesses\": \"The main weakness, in my opinion, lies in the paper's limited originality. The paper's main contribution lies in the effective integration of existing techniques rather than the introduction of fundamentally new concepts. Many of the individual components, such as DiT and flow matching, have been explored in prior work.\", \"questions\": \"A lot of the design (including the incorporation of [nextline] and [nextframe] tokens) appears to indicate that mixed-modality training could be a strength of the work, and yet this was not realized, since the modalities were trained separately. Authors do comment on \\\"the imbalance of data quantity for different modalities and diverse latent space distribution\\\" being the reason, but more elaboration would be appreciated.\\n\\nFor image editing, how are the editing prompts incorporated during the Flow ODE solving process? How well-localized are the edits (if the editing instructions refer to a specific object/region)?\\n\\nIn Table 1, it appears the models being compared are of widely varying sizes. 
It would be great if the size of the models could be indicated in the table, so the reader understands what might be the effect of scale vs. method/model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer L7zd (1/2)\", \"comment\": \"We thank reviewer 2 (L7zd) for acknowledging the contribution of our paper and providing thoughtful comments. Please see our response to the feedback below.\\n\\n**Q1:** Novelty.\\n\\n**Please refer to the [global reply](https://openreview.net/forum?id=EbWf36quzd&noteId=dxj5QkDALU) for more details.** We acknowledge that our framework incorporates existing techniques. Our objective is to demonstrate their effectiveness in enhancing training stability and efficiency when developing scalable diffusion transformers across various modalities. By leveraging these design choices, we scale the diffusion transformer to 7B parameters and showcase new capabilities, such as RoPE-based resolution extrapolation, high-resolution editing, compositional generation, and style-consistent generation.\\n\\n**Q2:** Video Quality.\\n\\nThank you for highlighting the limitations of our video results. Our paper primarily focuses on establishing a unified, simple, and scalable DiT architecture with extensive experiments on image generation, alongside preliminary validation in other modalities such as video and 3D. Compared to current state-of-the-art video generation models, Lumina utilizes significantly fewer computational resources, data, and tricks.\\n\\nOne of the biggest challenges in video generation is efficiently modeling the highly redundant video frames along the temporal dimension. Lumina follows the T2I settings and still uses the SDXL image VAE to compress each frame, which introduces redundancy in video sequences. 
Moreover, using a large amount of image data for joint training is a recently recognized technique in the community to significantly enhance video visual quality. The oversaturation issue you mentioned may be related to Lumina's use of video data alone for training and the high CFG scale during inference.\\n\\nWe are currently training a version of Lumina-T2V with joint image-video training based on a 3D VAE, and we have found that it significantly improves video quality under the same architecture.\\n\\n**Q3:** Additional Quantitative metrics, e.g., user studies.\\n\\nWe agree that conventional metrics such as FID and CLIP-Score may not accurately reflect the generation quality. Conducting user studies during the rebuttal stage can be time-consuming and expensive, so we design an AI preference study to evaluate Lumina-T2I against other text-to-image models, following PixArt [1]. Specifically, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality and text-image alignment. \\n\\nAs shown in the following table, Lumina-T2I demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, which is ~100x greater than our models. Besides, our model also uses less than 1/3 training compute of PixArt-Sigma, which is already a training-efficient model. \\n\\nHowever, we have to admit that Lumina-T2I still underperforms some SoTA models in terms of text-image alignment or compositional generation. In addition to the gap in data size and training compute, SD3 proposes the MMDiT architecture, which leverages an additional text branch and joint attention to refine T5 text embedding. In contrast, Lumina-T2I leverages cross-attention to inject causal LLaMA text embeddings. 
We believe that the text-image alignment can be further enhanced by adding a bidirectional transformer to refine the causal LLaMA text embeddings.\\n\\n| Model | Winrate |\\n| --- | --- |\\n| SD3 | 58.6% |\\n| PixArt | 41.0% |\\n\\n[1] Chen, Junsong, et al. \\\"Pixart-\\\\sigma: Weak-to-strong training of diffusion transformer for 4k text-to-image generation.\\\"\\u00a0*arXiv preprint arXiv:2403.04692*\\u00a0(2024).\\n\\n**Q4:** Audio demos.\\n\\nThanks for your suggestion. We have added some audio demos in the updated supplementary materials.\"}", "{\"title\": \"Response to Reviewer X8ux (1/2)\", \"comment\": \"We thank reviewer 1 (X8ux) for acknowledging the contribution of our paper and providing thoughtful comments. Please see our response to the feedback below.\\n\\n**Q1:** Further discussion on some design choices, e.g., RoPE vs 3D RoPE.\\n\\nWe thank the reviewer for raising this question. Current implementations of 2D/3D RoPE [1,2] consider only axial frequencies by simply splitting the positional embedding of the x/y/z-axes. This makes them functionally equivalent to our 1D RoPE with identifiers when representing spatial-temporal positions in images or videos. We argue that these designs, which introduce visual priors, are more suitable for building expert models tailored to specific visual tasks. Considering our ultimate goal of building a foundational generative model, we choose 1D RoPE for its simplicity, which is similar to the unified paradigm in autoregressive models [3], making our framework easier to generalize across various modalities and tasks.\\n\\n[1] Yang, Zhuoyi, et al. \\\"Cogvideox: Text-to-video diffusion models with an expert transformer.\\\"\\u00a0*arXiv preprint arXiv:2408.06072*\\u00a0(2024).\\n\\n[2] Lu, Jiasen, et al. \\\"Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision Language Audio and Action.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
2024.\\n\\n[3] Wang, Xinlong, et al. \"Emu3: Next-token prediction is all you need.\"\u00a0*arXiv preprint arXiv:2409.18869*\u00a0(2024).\\n\\n**Q2:** Further discussion on alternative strategies such as DiT-MoE and insights into complex tasks that require scaling.\\n\\nThank you for your insightful feedback. We acknowledge the potential of MoE to enhance model scalability, as recognized by the LLM community. Our primary focus was to explore a scalable base architecture of DiT, which is orthogonal and compatible with MoE techniques.\\n\\nTo address your suggestion, we conducted experiments incorporating MoE into the Flag-DiT architecture. Specifically, we implemented both time-centric and token-centric router versions of DiT-MoE, training them on the ImageNet-256 benchmark for 700k steps. The results, detailed in the table below, show improvements in FID and other metrics compared to the baseline, demonstrating the potential of DiT-MoE.\\n\\nSince text-to-image generation serves as a foundation for more complex tasks such as video and 3D generation, we anticipate that our exploration with Lumina in T2I will further validate the proposed architecture's effectiveness in future, more complex tasks.\\n\\n| | FID | sFID | IS | Precision | Recall |\\n| --- | --- | --- | --- | --- | --- |\\n| Baseline | 2.51 | 4.83 | 242.36 | **0.82** | 0.57 |\\n| + Time MoE | 2.36 | 4.87 | 254.46 | 0.82 | 0.58 |\\n| + Spatial MoE | **2.27** | **4.82** | **261.98** | 0.81 | **0.59** |\"}", "{\"summary\": \"This manuscript gives a detailed introduction to a family of generative models, Lumina-T2X, which shares the same Flag-DiT architectures. By integrating advanced training recipes in the proposed frameworks, the proposed architectures demonstrate superior training stability, easy convergence, and flexible resolution extrapolation. 
Additionally, preliminary explorations are conducted into its application as a universal architecture for generating data in a wider range of modalities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed framework integrates many advanced techniques for generative models, with clear demonstration of their motivations and effectiveness. The experience of synergistically combining these components shared in this paper could provide sufficient value for the community.\\n\\n2. As a versatile architecture, Flag-DiT has achieved impressively generalizable results across multiple modalities.\", \"weaknesses\": \"1. Despite the appealing properties achieved by integrating many existing techniques, its unique innovative contributions lack clear highlighting.\\n\\n2. The quantitative comparisons of the generation for modalities besides images have not been provided.\\n\\n3. The paper claims the scalability of the Flag-DiT architecture in many paragraphs, but has not convincingly demonstrated the performance gains obtained from scaling.\\n\\n4. In L326, \u201cusing using\u201d is redundant and should be corrected to a single \u201cusing.\u201d\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to Reviewer X8ux (2/2)\", \"comment\": \"**Q3:** Details on the FSDP sharding strategy.\\n\\nOur basic units for FSDP and checkpointing wrapping are transformer blocks, namely the combinations of one attention layer and one FFN. 
In general, we have two different design choices, and we choose the one that performs better in each specific setting.\nThe first choice is using ShardingStrategy.SHARD_GRAD_OP (which is similar to ZeRO-2) for FSDP while disabling checkpointing, and using gradient accumulation to reach a reasonable batch size. ShardingStrategy.SHARD_GRAD_OP means the model parameters are not sharded between forward and backward, and not sharded among gradient accumulation iterations. This sharding strategy saves communication at the cost of higher GPU memory usage (especially when the model size is large), and thus the maximum possible batch size would be relatively low, making gradient accumulation usually a must. This design is usually adopted for low-resolution small-model-size settings.\nWhen training with large models and high resolutions, we usually adopt the second setting, namely using ShardingStrategy.FULL_SHARD (similar to ZeRO-3) combined with activation checkpointing. This setting achieves extraordinary memory savings, so the batch size can be relatively high, and gradient accumulation is usually no longer needed.\nTensor parallel is not leveraged in our implementation because the maximum size of our model is 7B, whereas LLMs like LLaMA usually start to use tensor parallel at the scale of 13B.\nBy further open-sourcing all training&inference config&code&checkpoint of Lumina-T2X, we hope that our framework can serve as a comprehensive recipe for researchers interested in building diffusion transformers across various fields.\\n\\n**Q4:** Is Lumina resource-limited?\\n\\nYes, our training resources were significantly smaller than those of current state-of-the-art T2I models. For example, SD3 [1] uses over 1B text-image pairs, which is ~100x greater than our models. Besides, our model also uses less than 1/3 training compute of PixArt [2], which is already a training-efficient model. 
Increasing training resources and data would likely continue to enhance the model's performance.\\n\\n[1] Esser, Patrick, et al. \\\"Scaling rectified flow transformers for high-resolution image synthesis.\\\"\\u00a0*Forty-first International Conference on Machine Learning*. 2024.\\n\\n[2] Chen, Junsong, et al. \\\"Pixart-\\\\sigma: Weak-to-strong training of diffusion transformer for 4k text-to-image generation.\\\"\\u00a0*arXiv preprint arXiv:2403.04692*\\u00a0(2024).\"}", "{\"comment\": \"Dear Reviewer LZg1,\\n\\nThank you for taking the time to review our additional results. We are pleased to hear that most of your concerns have been resolved. We greatly appreciate your consideration and are hopeful that our revisions will meet your expectations.\\n\\nPlease let us know if there are any further points you'd like us to address.\\n\\nBest regards,\\n\\nLumina Authors\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks to the author for their detailed responses. After reading the other reviewers' comments and author responses, my concerns have been addressed and I will improve my rate to 8.\"}", "{\"title\": \"Response to Reviewer ozQQ\", \"comment\": \"We thank reviewer 4 (ozQQ) for acknowledging the contribution of our paper and providing thoughtful comments. Please see our response to the feedback below.\\n\\n**Q1:** Novelty.\\n\\n**Please refer to the [global reply](https://openreview.net/forum?id=EbWf36quzd&noteId=dxj5QkDALU) for more details.** We acknowledge that our framework incorporates existing techniques. Our objective is to demonstrate their effectiveness in enhancing training stability and efficiency when developing scalable diffusion transformers across various modalities. 
By leveraging these design choices, we scale the diffusion transformer to 7B parameters and showcase new capabilities, such as RoPE-based resolution extrapolation, high-resolution editing, compositional generation, and style-consistent generation.\\n\\n**Q2:** More comparison with sota methods\\n\\nThank you for your insightful feedback. We would like to clarify that Flag-DiT demonstrates better performance on metrics such as FID and CLIP-Score, surpassing the comparison methods. Additionally, our ablation studies confirm that Flag-DiT achieves significantly faster convergence compared to DiT and SiT of the same size.\\n\\nAs mentioned in our above response, the focus of our paper is on exploring a scalable architecture for DiT, which is compatible with many other methods. For instance, we conducted experiments incorporating MoE into the Flag-DiT architecture. Specifically, we implemented both time-centric and token-centric router versions of DiT-MoE, training them on the ImageNet-256 benchmark for 700k steps. The results, detailed in the table below, show improvements in FID and other metrics compared to the baseline, demonstrating the potential of combining Flag-DiT with more advanced techniques.\\n\\n| | FID | sFID | IS | Precision | Recall |\\n| --- | --- | --- | --- | --- | --- |\\n| Baseline | 2.51 | 4.83 | 242.36 | **0.82** | 0.57 |\\n| + Time MoE | 2.36 | 4.87 | 254.46 | 0.82 | 0.58 |\\n| + Spatial MoE | **2.27** | **4.82** | **261.98** | 0.81 | **0.59** |\\n\\nBesides, since ImageNet serves primarily as a proxy task for generation and conventional metrics may not accurately reflect the generation quality, we have conducted more comprehensive comparative experiments and visualizations on the more complex text-to-image task. Some generated images are visualized in Figure 7-8. Besides, we design an AI preference study to evaluate Lumina-T2I against other text-to-image models, following PixArt [1]. 
Specifically, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality and text-image alignment. As shown in the following table, Lumina-T2I demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, which is ~100x greater than our models. Besides, our model also uses less than 1/3 training compute of PixArt-Sigma, which is already a training-efficient model. However, we have to admit that Lumina-T2I still underperforms these SoTA models in terms of text-image alignment or compositional generation, due to inadequate data and training.\\n\\n| Model | Winrate |\\n| --- | --- |\\n| SD3 | 58.6% |\\n| PixArt | 41.0% |\\n\\nWe also conducted a quantitative evaluation to validate the resolution extrapolation capability of Lumina-T2I. The results indicate that Lumina-T2I, equipped with NTK-aware Scaled RoPE, Time-shifting, and Proportional Attention, exhibits better extrapolation performance compared to PixArt and SD3.\\n\\n| Model | CLIP Score | FID |\\n| --- | --- | --- |\\n| PixArt | 27.18 | 109.65 |\\n| SD3 | 26.73 | 93.78 |\\n| Lumina-T2I | 28.08 | 78.44 |\\n\\n[1] Chen, Junsong, et al. \"Pixart-\\sigma: Weak-to-strong training of diffusion transformer for 4k text-to-image generation.\"\u00a0*arXiv preprint arXiv:2403.04692*\u00a0(2024).\"}", "{\"summary\": \"The authors introduce Lumina-T2X, a framework for generation of image, video, 3D or audio from text instructions. This work introduces Flag-DiT, which is a flow-based diffusion transformer architecture which allows for generation of various modalities under varying aspect ratios and resolutions. Additional techniques such as RoPE, KQ-Norm and flow matching are used to allow for scalability. 
Specific details are provided for translation into each of the modalities, and state-of-the-art quantitative performance is demonstrated compared to recent baselines.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper introduces a general generative framework from text to a number of different types of modalities. The main strengths include:\n1. **Reproducibility**: The paper has an in-depth treatment of most of the implementation details and design choices aiding in easy reproduction of the pipeline. Furthermore, the associated code has been provided (and made open source) further accelerating the ability to use the framework and build upon it. \n2. **Quality**: The paper is well written with adequate motivation for each design element used and experiments justifying the need for each. The authors do a good job of highlighting the key innovations that were important to achieve state-of-the-art performance. These include the use of *RoPE*, *RMSNorm and KQ-Norm* for scalable training, *flow matching* for improved training dynamics, *learnable tokens* for any resolution generation and *carefully curated data* for improved image quality. \n3. **Comparison**: Comparisons are provided against a number of different baselines under matched settings to demonstrate improved performance on the label-conditioned ImageNet generation task. \n4. **Result quality**: The image quality demonstrated in the paper is impressive and the ability to generate at variable resolution and aspect ratios in a unified model is very useful. \n5. **Appendix**: The appendix provides a lot of additional insights and useful empirical findings that are helpful to research in multimodal generation.\n6. **Advanced applications:** A number of training-free applications like resolution extrapolation and style-consistent generation have been demonstrated, which highlights the capability of the framework. \n7. 
**Any resolution generation**: The idea of using learnable `[nextframe]` and `[nextline]` tokens is elegant and simple to incorporate and is potentially useful in a wide variety of token-based generative models, allowing us to equip these models with the ability to generate at any resolution/aspect ratio.\", \"weaknesses\": \"1. **Novelty**: Although not a very strict weakness, the novelty is somewhat limited by the fact that most of the additional components used in this framework (such as RoPE, RMSNorm, KQ-Norm) have been shown to be effective in previous work. Nonetheless, there is some merit in trying to arrive at the optimal combination of such techniques to allow for scalable and efficient generation.\\n2. **Video Quality**: Although impressive qualitative results have been shown in the paper for image generation, the videos shared in the supplementary seem to have limited quality. Particularly at high resolution, the model seems to generate frames that are super saturated (such as demo number 4 in the supplementary). Providing insights about video generation performance at high resolution and potential limitations would help better understand the difference in video vs image quality. In particular, are there specific challenges in video generation that make it harder/quality lower than just single-frame generation? \\n3. **Additional Quantitative metrics:** Although the authors have provided extensive quantitative comparisons and ablations, the work would benefit from a user study on generated quality (since FID is mostly a proxy metric). Furthermore, including analysis of prompt adherence for the images generated by different approaches would be insightful.\\n4. **Audio performance**: Providing some qualitative examples of generated audio in the supplementary materials would help evaluate the performance of the Text-to-Audio mode of the model. 
In particular, similar to the video and 3D demos in the supplementary, providing a set of generated audio samples for diverse kinds of text prompts would help highlight the T2Audio performance of this approach.\", \"questions\": \"1. At what frame rates are the videos generated?\\n2. L232-236: The authors mention that zero-initialized attention induces sparsity in the text conditioning. Is the implication that only 10% of the prompt is being adhered to? Or that only 10% of the prompt embeddings are necessary to generate the image? In particular, are there performance benefits (either speed or quality) due to this induced sparsity? Any experiments demonstrating this claim and its associated benefits would be very helpful.\\n3. The authors mention that videos can be generated up to 128 frames (for a 128K token context window); given that the framework supports any-resolution generation, what is the resolution for this generation? (Does 1000 tokens per frame imply a 32*32 latent image?) Can we generate a larger number of frames at a smaller resolution? In particular, a small graph/table demonstrating the quality difference as a function of number of frames vs resolution would be helpful. Also, additional insights about how the resolution vs number of frames for the same token budget affects the different T2X modes would provide some value in understanding how to work with different modalities.\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
However, the \\u201cquantitative comparisons for other modalities\\u201d in W2 actually refer to comparisons with other state-of-the-art methods in these tasks, as readers may lack background knowledge in the generation of other modalities such as audio.\\nOverall, I agree with the contributions and their value claimed by the author. Therefore, I will keep my initial positive recommendation.\"}", "{\"summary\": \"1. The authors propose Lumina-T2X, a family of Flow-based Large Diffusion Transformers (Flag-DiT) designed for efficient and scalable training. It seamlessly unifies representations across diverse modalities and varying spatial-temporal resolutions.\\n2. Flag-DiT improves scalability, stability, and flexibility over the original DiT. And it includes four main components, which are frame-wise encoding for different modalities, text encoding with diverse text encoders, input & target construction and network architecture and loss.\\n3. Lumina-T2X can be used for text-to-image generation, resolution extrapolation and other visual tasks such as style-consistent generation, high-resolution image editing, and compositional generation. And it can achieve these tasks in a tuning-free manner, uniformly tackling these tasks through token operations.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Flag-DiT enhances the scalability, stability, and flexibility of the original DiT.\\n2. Lumina-T2X can be utilized in multiple applications in a tuning-free manner. The visual quliaty of generated images looks excellent. And it can achieve generative modeling with rapid convergence, robust scalability, and powerful high-resolution capabilities.\\n3. Lumina-T2X seamlessly unifies the representations of different modalities across various spatial-temporal resolutions.\", \"weaknesses\": \"1. Flag-DiT lacks novelty, seems to be a patchwork of techniques borrowed from other methods to address the issues in DiT. 
For example, RoPE, RMSNorm, KQ-Norm and the flow matching formulation are techniques proposed and used in other works. Please clarify what you consider to be the novel contributions of your work beyond combining existing techniques OR explain the innovation in combining these techniques.\\n2. Some metrics of Flag-DiT shown in Table 1 do not seem very impressive compared with other state-of-the-art methods. It is clear that the results of some comparison methods are noticeably better.\\n3. The authors should provide some visual comparisons of generated images with other SOTA methods to further demonstrate the superiority of Lumina-T2X. The authors can select several methods from Table 1 to perform visual comparisons.\", \"questions\": \"1. Did the authors make any modifications to RoPE, RMSNorm, KQ-Norm and the flow matching formulation? Please clarify what you consider to be the novel contributions of your work beyond combining existing techniques OR explain the innovation in combining these techniques.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer LZg1\", \"comment\": \"We thank reviewer 5 (LZg1) for acknowledging the contribution of our paper and providing thoughtful comments. Please see our response to the feedback below.\\n\\n**Q1:** Novelty.\\n\\n**Please refer to the [global reply](https://openreview.net/forum?id=EbWf36quzd&noteId=dxj5QkDALU) for more details.** We acknowledge that our framework incorporates existing techniques. Our objective is to demonstrate their effectiveness in enhancing training stability and efficiency when developing scalable diffusion transformers across various modalities. 
By leveraging these design choices, we scale the diffusion transformer to 7B parameters and showcase new capabilities, such as RoPE-based resolution extrapolation, high-resolution editing, compositional generation, and style-consistent generation.\\n\\n**Q2:** Evaluation on more modalities.\\n\\nThank you for your suggestion. As clarified in our contribution above, our experiments on other modalities are extensions of the Lumina architecture, so we provided preliminary exploration in the appendix. Additionally, we included quantitative results for audio in Table 5 of the appendix, and further quantitative results for text-to-image generation are provided in [our response](https://openreview.net/forum?id=EbWf36quzd&noteId=F701KpwbPy) to Reviewer 4 (ozQQ).\\n\\n**Q3:** Scalability.\\n\\nInitially, we found that naively scaling the Diffusion Transformer (such as DiT and SiT) failed due to training instability. To address this, we introduced a series of architectural modifications that successfully enabled the stable training of 600M-7B parameter Diffusion Transformers across various modalities. In addition, our ablation studies on ImageNet (Figure 3) demonstrate the effectiveness of scaling diffusion transformers.\\n\\n**Q4:** Typo.\\n\\nThank you for pointing out this typo. We have corrected it.\"}" ] }
EbOhZyxIzQ
Overcoming Knowledge Barriers: Online Imitation Learning from Visual Observation with Pretrained World Models
[ "Xingyuan Zhang", "Philip Becker-Ehmck", "Patrick van der Smagt", "Maximilian Karl" ]
Pretraining and finetuning models has become increasingly popular in decision-making. But there are still serious impediments in Imitation Learning from Observation (ILfO) with pretrained models. This study identifies two primary obstacles: the Embodiment Knowledge Barrier (EKB) and the Demonstration Knowledge Barrier (DKB). The EKB emerges due to the pretrained models' limitations in handling novel observations, which leads to inaccurate action inference. Conversely, the DKB stems from the reliance on limited demonstration datasets, restricting the model's adaptability across diverse scenarios. We propose separate solutions to overcome each barrier and apply them to Action Inference by Maximising Evidence (AIME), a state-of-the-art algorithm. This new algorithm, AIME-NoB, integrates online interactions and a data-driven regulariser to mitigate the EKB. Additionally, it uses a surrogate reward function to broaden the policy's supported states, addressing the DKB. Our experiments on vision-based control tasks from the DeepMind Control Suite and MetaWorld benchmarks show that AIME-NoB significantly improves sample efficiency and converged performance, presenting a robust framework for overcoming the challenges in ILfO with pretrained models.
[ "World Models", "Foundation Models", "Pretraining", "Imitation Learning from Observation", "Decision-making" ]
Reject
https://openreview.net/pdf?id=EbOhZyxIzQ
https://openreview.net/forum?id=EbOhZyxIzQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zY4zizDoVY", "skHYEp36Zz", "sZYdAEKU6E", "ngGvPIpRJO", "neb64SWm4I", "lr0FPmIRGe", "lDo9JoxBxm", "l3c6ZSCLxv", "hWXAdCLDzw", "fnyWcUDS3o", "fmmfzmi3xh", "e1DeeijDhs", "dh62uEWxWD", "dTKw3n3pi7", "dE7zxsoPbD", "XaoIPn7XXT", "V4BgFjTope", "U8qyRUxPq7", "TuU1Sfgihz", "NGtiXQcyUu", "KWZ0jzcrUp", "GRMe86Uhlx", "CwOXAG2kxI", "BGaHEJWDU8", "BBcvcij2js", "25hxIuFEbd" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732191083367, 1732258527342, 1730937186220, 1732735126055, 1732735606114, 1731683010939, 1734647303966, 1733000784958, 1732738067260, 1733000735177, 1733109446096, 1731683209246, 1731683132135, 1729976800355, 1731683461206, 1733313098069, 1732725749848, 1732540900073, 1731683746829, 1730693464479, 1732494395763, 1730514180971, 1737523684644, 1732494366118, 1732480720098, 1731683797071 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_Yddj" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_vbjw" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Area_Chair_8rRR" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_Yddj" ], [ 
"ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_vbjw" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_4pZy" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_SkmK" ], [ "ICLR.cc/2025/Conference/Submission5107/Reviewer_vbjw" ], [ "ICLR.cc/2025/Conference/Submission5107/Authors" ] ], "structured_content_str": [ "{\"title\": \"Global response\", \"comment\": [\"We sincerely thank all reviewers for their time and constructive feedback. We are pleased to note that the reviewers consistently acknowledge several key strengths of our paper:\", \"The clear and accessible writing style (Reviewers Yddj, SkmK, vbjw)\", \"The comprehensive experimental evaluation (Reviewers Yddj, SkmK, vbjw)\", \"The significant performance improvements achieved by AIME-NoB compared to existing methods (All reviewers)\", \"In response to the reviewers' suggestions, we have made the following major enhancements to the paper:\", \"1. Theoretical Foundation:\", \"As suggested by Reviewer vbjw, we have provided formal definitions of the EKB and DKB concepts in Appendix K, strengthening the theoretical underpinning of our work.\", \"2. 
Extended Experimental Validation:\", \"Following recommendations from Reviewers Yddj, SkmK, and vbjw, we have included additional experiments in Appendix L that demonstrate: a) AIME-NoB's capability to make progress on and potentially solve complex tasks like Humanoid - a first in ILfO, to the best of our knowledge; and b) better sample efficiency than Dreamer, a state-of-the-art model-based RL algorithm, even when Dreamer has access to true environmental rewards\", \"3. Additional Improvements:\", \"Softened some claims based on Reviewer SkmK's suggestions\", \"Expanded citation coverage as recommended by Reviewer vbjw\", \"All modifications are highlighted in blue in the revised manuscript. We believe these enhancements substantially strengthen our paper by providing stronger theoretical foundations and broader experimental validation. Detailed responses to individual reviewer comments are provided below. We kindly request reviewers to consider raising their scores if our responses adequately address their concerns.\"]}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the thorough response, I appreciate it. I believe that my main concerns have been addressed. Given that I already lean towards acceptance, I will wait for other reviewers -- 4pZy and vbjw -- to engage before making my final judgement.\"}", "{\"summary\": \"This paper studies two barriers in imitation learning from observations, namely the Embodiment Knowledge Barrier (EKB) and Demonstration Knowledge Barrier (DKB). EKB refers to the gap when generalizing to new observations and DKB refers to the gap when generalizing from a limited number of expert demonstrations.\\n\\nIt proposes to use online interaction to reduce EKB and introduces a weighted loss between new interactions and pre-training data. 
For DKB, it proposes to use a surrogate reward, such as a discriminator in AIL.\\n\\nExperimental results suggest AIME-NoB outperforms baselines by a significant margin in DMC and MetaWorld.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Clearly identifies the shortcomings of existing ILfO methods and proposes reasonable solutions to reduce the gaps.\", \"Extensive experiments on the different design choices, e.g. reward functions, and comparisons with baselines demonstrate solid improvement.\", \"Paper writing is clear and easy to follow.\"], \"weaknesses\": [\"The ideas to reduce EKB and DKB seem to be a simple combination of previous methods, such as using online interaction with weighted sampling, adversarial training, etc. It's unclear whether the distinction from previous methods (e.g., AIL) is significant enough.\", \"For these environments, it isn't hard to come up with a hand-designed reward or to learn a surrogate reward. There is a lack of comparisons with Dreamer-like methods using a hand-designed or surrogate reward.\"], \"questions\": [\"How do you differentiate from previous work that uses online interactions and surrogate rewards?\", \"How does it compare with RL with some hand-designed reward or a learned surrogate reward?\", \"Does the method generalize to real-world environments, such as learning from videos?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Hi reviewer vbjw,\\n\\nDoes the authors' response address your concerns?\"}", "{\"comment\": \"Like I mentioned in the previous comment, my concerns about EKB/DKB definitions and the experiments are addressed. The other two remain. My score is a 5.\"}", "{\"comment\": \"We would like to thank you for your time reviewing our paper and giving constructive feedback. 
Please find our replies to your concerns below.\\n\\n> How do you differentiate from previous work that uses online interactions and surrogate rewards?\\n\\nThe key innovation of AIME-NoB compared to previous online ILfO approaches like PatchAIL and OT is its ability to leverage pre-trained world models. While previous methods require training models from scratch, AIME-NoB can effectively utilize models pretrained on large datasets. As demonstrated in Figure 6(c), this capability is crucial for AIME-NoB's performance, showing much better sample efficiency than training from scratch. This aligns with the broader paradigm shift in AI toward pre-training and fine-tuning approaches, which have proven highly effective across multiple domains.\\n\\n> How does it compare with RL with some hand-designed reward or a learned surrogate reward?\\n\\nFirst, we would like to point out that one of the most important motivations to study ILfO is that it removes the need for a reward. Reward engineering is a notorious problem in the RL field, which prevents RL from scaling to hundreds or even thousands of different tasks. So in our setup, we deliberately remove the reward to make the algorithm applicable to a broader scope of problems.\\n\\nTo address your question directly, we compare AIME-NoB with Dreamer using the hand-designed reward from the environments. The results are shown in Appendix J of the revised paper. \\n\\nFor DMC, as we can see from the results in Figure 17, AIME-NoB achieves better sample efficiency on 8 out of 9 tasks, except for the problematic cartpole-swingup environment, as discussed in Appendix J. On hopper-hop and quadruped-run, AIME-NoB is surpassed by Dreamer in the end. This is because, as an imitation learning algorithm, AIME-NoB's performance is limited by the quality of the expert.\\n\\nFor MetaWorld, since DreamerV3 is not officially benchmarked on it, we had to produce the results ourselves. We report the results in Figure 18. 
As the manipulation tasks typically pose challenges for exploration, we see that Dreamer, with or without the help of the pretrained model, struggles to accomplish the task within the 500k environment steps. At the same time, with the help of the demonstrations, it is much easier for AIME-NoB to explore the relevant regions of the observation space, which results in better sample efficiency. This marks an advantage of using demonstrations over using a scalar reward to define the task.\\n\\n> Does the method generalize to real-world environments, such as learning from videos?\\n\\nIn this paper, we run all our experiments on image observations, i.e. with videos as demonstrations. But indeed, we only run our experiments in simulators. We believe AIME-NoB is well-suited for real-world applications, as its ability to leverage pre-trained models is particularly valuable in real-world scenarios where data collection is costly. Do you have some suggested real-world environments that we can test on? Would these align with your expectations for real-world validation?\"}", "{\"metareview\": [\"(a) The paper introduces AIME-NoB, an algorithm designed to improve Imitation Learning from Observation (ILfO) using pre-trained world models. The authors define two barriers in imitation learning from observation (ILfO) methods: the Embodiment Knowledge Barrier (EKB) and the Demonstration Knowledge Barrier (DKB). The EKB describes the limitation of a pretrained model when confronted with novel observations and actions beyond its training experience. The DKB describes the generalization from a limited number of expert demonstrations in imitation learning. The proposed approaches to address these limitations are: (1) balance the offline and online datasets when updating the model, (2) combine AIME and AIL objectives.\", \"(b) Strengths:\", \"Problem Definition: The paper identifies and analyzes the EKB and DKB limitations in current ILfO methods using pretrained models. 
The clarity improved during rebuttal.\", \"Effective Solutions: The authors propose practical solutions for each knowledge barrier that show significant performance improvements.\", \"Ablation Studies: The authors perform thorough ablation studies and sensitivity tests to analyze the contribution of different components, such as the surrogate reward and data regularizer, to the overall performance.\", \"Clear Presentation: Multiple reviewers praised the paper's clear and accessible writing style, which helped in understanding the proposed approach. The use of figures (e.g. Figure 1) was also cited as being helpful.\", \"Strong results on DMC and Meta-World.\", \"(c) Weaknesses:\", \"Limited novelty: Several reviewers questioned the novelty of the proposed approach. The core method combines online interaction, a data-driven regularizer, and a surrogate reward, all of which have been previously used in some form.\", \"Incremental Contribution: Some reviewers felt that combining existing techniques, although effective, was closer to an implementation choice than a significant technical contribution.\", \"Realistic impact: The reviewers questioned the direct applicability to real-world robotics problems, since actions are typically available or can be estimated in robotics, and real-world videos often have a large domain shift.\", \"(d) My decision is to reject the paper.\", \"I think the paper does a good job at motivating the design choices above, and has thorough experiments with strong results, especially with the new additions during the rebuttal, which show that the design choices significantly improve over prior methods. However, the question is whether this paper makes significant contributions in at least one of (1) methodological novelty or (2) realistic impact. 
As the reviewers pointed out, the case seems weaker here.\"], \"additional_comments_on_reviewer_discussion\": \"In general, the reviewers liked the proposal of defining EKB and DKB.\\n\\nSome reviewers questioned the methodological contribution and the lack of formal definitions of EKB and DKB. These were partially addressed during rebuttal, especially with the formal definition of EKB and DKB being proposed. At the end, Reviewer vbjw maintained that the methodological contribution was minor but acknowledged it may be a difference in opinion. Reviewer Yddj acknowledged the clarification of the use of pre-trained world models, but still viewed the method as a simple combination of previous work. \\n\\nThere were some discussions around experimental validation, regarding having more challenging tasks and why the method failed on Cartpole Swingup, which they would have expected to be the easiest task. During rebuttal, the authors provided humanoid experiments in DMC and additional tasks in MetaWorld, explained why the method failed on the Cartpole task, and compared their method with Dreamer.\\n\\nReviewer vbjw questioned the connection of the method to a real-world problem. They pointed out that in robotics, actions are often available or can be closely estimated, and that real-world videos often present a significant domain shift. The reviewer still remains concerned. \\n\\nIn summary, the authors made several revisions to the paper to address the reviewers' concerns, including adding formal definitions, conducting more challenging experiments, and clarifying their claims. Some reviewers remained skeptical about the novelty of the method and its applicability to the real world.\"}", "{\"title\": \"kind reminder\", \"comment\": \"Dear reviewer 4pZy,\\n\\nAs the discussion period concludes on December 2, we wanted to kindly remind you to read and reply to our rebuttal and the revised paper. We welcome you to share any remaining concerns or feedback. 
If our responses have adequately addressed your comments, we would greatly appreciate it if you could update your score accordingly.\"}", "{\"comment\": \"Thank you for the clarification!\"}", "{\"title\": \"kind reminder\", \"comment\": \"Dear reviewer Yddj,\\n\\nAs the discussion period concludes on December 2, we wanted to kindly remind you to read and reply to our rebuttal and the revised paper. We welcome you to share any remaining concerns or feedback. If our responses have adequately addressed your comments, we would greatly appreciate it if you could update your score accordingly.\"}", "{\"comment\": \"Thank you for your response! I appreciate the comparison with Dreamer that shows the advantage of demonstrations over rewards. The authors clarify that the major difference from other ILfO work is the usage of a pre-trained world model. However, I agree with other reviewers that this seems a simple combination of previous work. I will keep my score.\"}", "{\"comment\": \"We would like to thank you for your time reviewing our paper and giving constructive feedback. Please find our replies to your concerns below.\\n\\n> I appreciate that the authors conduct experiments on both tasks from DMControl and Meta-World, and consider tasks with varying difficulty in the case of Meta-World. However, I would have liked to see a few examples of tasks that are more challenging, i.e., where the proposed method (along with baselines) struggle a bit more. For DMControl this could be e.g. any of the Humanoid or Dog tasks, and for Meta-World this could be e.g. Stick Pull / Push or Pick Place (Shelf).\\n\\nThanks for your suggestions of harder tasks. As seen from previous benchmark results, the baselines BCO($\\\\alpha$), OT and PatchAIL do not perform well, so we do not expect them to be good on these even harder tasks. Thus, we currently mainly compare AIME-NoB with AIME. If time permits during the rebuttal phase, we will add these results later. 
All the results are available in Appendix L of the revised paper.\\n\\nFor MetaWorld, we include four additional tasks, namely pick-place, shelf-place, stick-pull and stick-push. The results are shown in Figure 15. From the results, AIME-NoB reliably outperforms AIME. \\nEven in the very hard tasks stick-pull and stick-push, AIME-NoB manages around 60% success rates. However, AIME-NoB doesn't perform as well on pick-place and shelf-place. We conjecture this is due to the visual difficulty of the tasks. It is known that world models based on a reconstruction loss struggle at modeling small objects, such as the small cube that needs to be picked up in these two tasks. Improving the world models' ability to model small objects or increasing the resolution of the observations would likely improve the performance. \\n\\nFor DMC, we conduct additional experiments on humanoid-stand, and show the results in Figure 16. Results show AIME-NoB's capability to make progress on and potentially solve complex tasks like Humanoid. To the best of our knowledge, this is the first time that an ILfO algorithm shows progress on a vision-based humanoid task.\\n\\n> I find that some of the claims / conclusions are a bit exaggerated relative to what the experimental results show. For example, the authors claim that \\\" the model pretrained on MW-mt50 offers much better results\\\" (L424) while the results in Figure 4(c) show a fairly small difference between the two curves (absolute ~15% increase in success rate on avg it seems). I would prefer claims to accurately reflect the evidence.\\n\\nWe thank you for this careful observation. We agree that our language should more precisely reflect the quantitative results. We have revised the paper accordingly, with changes highlighted in blue to better align our claims with the experimental evidence.\\n\\n> Figure 10 indicates that the proposed method succeeds in all of the considered tasks except Cartpole Swingup. 
I would have expected this task to be the easiest of them all; can the authors please comment on why the method fails on this particular task?\\n\\nThanks for the question. We were also very surprised by the poor result on CartPole during our experiments. Thus, we have conducted a separate analysis of CartPole in Appendix J. In a nutshell, we think it is a problem of the environment design -- the camera position cannot cover the entire moving space of the CartPole, so the most important behaviour, the swingup, is not visible from the image observation. This makes the AIME loss less effective and even harmful. Thus, we tuned the hyper-parameters to lower the effect of the AIME loss, and we have shown that in this way AIME-NoB can solve CartPole swingup. Please refer to Appendix J for more details.\"}", "{\"comment\": \"Thank you for taking the time to offer feedback on our paper. We address your concerns below:\\n\\n> The logic is confusing. The authors state that in order to alleviate EKB, they additionally use online interactions. It is obvious that online trajectories will bring more benefits since the behavior policy to collect offline datasets differs from the current policy. So the improvement compared to AIME might be largely caused by the setting difference rather than algorithm improvement.\", \"we_appreciate_this_concern_but_respectfully_disagree_for_several_reasons\": \"1. **Novel Setting as Contribution**: Extending algorithms to new settings with demonstrated improvements is a recognized contribution in reinforcement learning research. For instance, DrQ [1] extended SAC [2] to handle image inputs, while Cal-QL [3] adapted CQL [4] to online settings. Both works were considered significant contributions to the field.\\n2. **Non-trivial Algorithm Design**: While online trajectories may intuitively seem beneficial, proper algorithm design is crucial for realizing these benefits. 
Our experiments demonstrate this: the BCO(\\u03b1) baseline, which also utilizes online data for fine-tuning, shows no improvement or even degraded performance (Figure 2). This highlights that simply having access to online data is insufficient without proper algorithmic design.\\n3. **Comprehensive Benchmarking**: While we compare against AIME to demonstrate resolution of knowledge barriers, our evaluation extends beyond this single comparison. AIME-NoB significantly outperforms state-of-the-art online ILfO algorithms like PatchAIL and OT, validating our approach's effectiveness.\\n4. **Fundamental Solution to EKB**: We contend that online interaction is the most scalable solution to EKB. This mirrors human learning - many complex skills (e.g., athletic movements) cannot be mastered through observation alone but require practical experience. While performance improvements in purely offline settings are possible through specialized architectures or regularization, such knowledge engineering approaches are inherently less scalable than allowing online interaction.\\n\\n> The setting is very complicated. To my understanding, AIME's setting includes an offline dataset\\u00a0($s_{off}$,$a_{off}$)\\u00a0for world model training, an expert dataset\\u00a0($s_{expert}$)\\u00a0for imitation learning. And AIME-NoB additionally assumes access to the environment to collect an online dataset\\u00a0($s_{on}$,$a_{on}$). The complexity of the setting makes it hard to make fair experimental comparisons as baselines might only assume access to a part of the datasets.\", \"we_acknowledge_this_complexity_but_would_like_to_contextualize_it\": \"1. The apparent complexity largely stems from the current state of the field, where pre-trained world models are not yet readily available. Once such models become standard resources, the training pipeline will be significantly streamlined, as is already the case in NLP and CV.\\n2. 
Our setting is fundamentally comparable to standard online RL in terms of required components - the expert dataset effectively replaces the reward function in traditional RL. The additional offline dataset aligns with the growing paradigm of pre-training and fine-tuning in machine learning.\\n3. While some comparisons (e.g., with PatchAIL and OT) may not utilize all available data, we argue that the ability to leverage diverse data sources is increasingly important in modern machine learning. Our method's ability to effectively utilize multiple data sources should be viewed as a feature rather than a limitation.\\n\\n[1] Yarats *et al.*, Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels, ICLR 2021 Spotlight\\n\\n[2] Haarnoja *et al.*, Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, ICML 2018\\n\\n[3] Nakamoto *et al.*, Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning, NeurIPS 2023\\n\\n[4] Kumar *et al.*, Conservative Q-Learning for Offline Reinforcement Learning, NeurIPS 2020\"}", "{\"summary\": \"**Problem setting**: have access to a demonstration data set without actions, an offline data set of suboptimal experience, and the ability to sample experience online without reward feedback\\n\\n**Proposed approach**: (1) balance the offline and online datasets when updating the model, (2) combine AIME and AIL objectives (though also considers other alternatives to AIL)\\n\\n**Experiments**: comparisons to relevant prior methods on DMC and MetaWorld tasks with visual observations\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Writing is fairly straightforward to understand\", \"Results show clear improvements over relevant methods\", \"The paper includes fairly extensive ablations/empirical analysis\"], \"weaknesses\": [\"Overall, the technical contribution seems like a fairly basic combination of prior 
works. While many methods build upon prior works, this particular combination seems closer to an implementation choice than a significant technical contribution. The method is also more complex than prior methods as it introduces new hyperparameters and combines multiple objectives.\", \"Other weaknesses\", \"It\\u2019s a bit unclear how this connects to a real-world problem. In robotics, you typically either collect data from a robot, where you can get actions or something close to them, or videos of humans/animals, which lack actions but also have significant domain shift. Connecting the problem statement to a real-world problem would be helpful to motivate the paper\\u2019s significance.\", \"EKB and DKB are defined quite informally and then are used extensively, sometimes in a hand-wavy way. I\\u2019m not sure if introducing them is helpful for understanding. I think it would be better to either remove them or define them more formally and measure the extent to which they contribute to poor performance.\", \"I appreciate that the experiments are on visual observations, though they would be stronger if they included more complex tasks such as dextrous tasks, longer-horizon tasks, or tasks with greater diversity/generalization\", \"Some relevant related works to consider citing/discussing:\", \"Mobile (https://arxiv.org/abs/2102.10769), though uses low-dim observations\", \"VMAIL (https://arxiv.org/abs/2107.08829), though assumes actions in demos\"], \"questions\": \"See suggestions in the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (Part 1 / 3)\", \"comment\": \"We would like to thank you for your time reviewing our paper and giving constructive feedback. Please find our replies to your concerns below.\\n\\n> Overall, the technical contribution seems like a fairly basic combination of prior works. 
While many methods build upon prior works, this particular combination seems closer to an implementation choice than a significant technical contribution. The method is also more complex than prior methods as it introduces new hyperparameters and combines multiple objectives.\", \"we_respectfully_disagree_with_the_characterization_of_aime_nob_as_merely_an_implementation_variant_of_aime_for_several_reasons\": \"1. **Principled Design**: Our combination of techniques is specifically motivated by the analysis of the Embodiment Knowledge Barrier (EKB) and the Demonstration Knowledge Barrier (DKB), as acknowledged by reviewers Yddj and SkmK. This makes AIME-NoB a principled solution rather than an arbitrary combination of prior works.\\n2. **Novel Integration**: We are the first to integrate these techniques with pretrained world models for Imitation Learning from Observation (ILfO). This integration required significant adaptations, such as leveraging the pretrained image encoder's embeddings for discriminator training, which substantially stabilizes the adversarial training process.\", \"regarding_complexity\": [\"While we do introduce new hyperparameters (replay ratio $\\\\alpha$ and value gradient loss weight $\\\\beta$), these additions are justified by their clear purposes.\", \"We provide comprehensive ablation studies and analysis that illuminate their roles and practical tuning guidelines.\", \"> It\\u2019s a bit unclear how this connects to a real-world problem. In robotics, you typically either collect data from a robot, where you can get actions or something close to them, or videos of humans/animals, which lack actions but also have significant domain shift. Connecting the problem statement to a real-world problem would be helpful to motivate the paper\\u2019s significance.\"], \"our_work_has_both_immediate_and_long_term_practical_relevance\": \"1. 
**Current Applications**:\\n - Action Space Mismatch: Even when robot data includes actions, the action space during data collection may differ from deployment. For example, open-x datasets recorded in end-effector Cartesian space may need to be applied to systems with torque-space controllers.\\n - Manual Demonstrations: When demonstrations are collected by physically moving the robot's end-effector, no explicit action information is available, necessitating observation-only learning.\\n2. **Path Toward Cross-Embodiment Learning**: While our primary focus is on efficient imitation learning, we acknowledge that the ultimate goal is to learn from human or animal demonstrations that have different embodiments from our robots. Our work represents an important step in this direction, complementing parallel research exploring the cross-embodiment aspect [1]. By integrating AIME-NoB with such cross-embodiment methods in future work, we aim to address the complete challenge of efficient learning from demonstrations across diverse embodiments.\\n\\n> Some relevant related works to consider citing/discussing\\n\\nThanks for your suggestions of the related works. We have cited them in the revised version.\\n\\n[1] Mazzaglia *et al.*, GenRL: Multimodal-foundation world models for generalization in embodied agents, NeurIPS 2024\"}", "{\"title\": \"summary of the discussion phase\", \"comment\": \"Dear reviewers and AC,\\n\\nWe would like to thank you again to spend time to review our paper and join the discussion phase. 
As the discussion phase comes to a close, we would like to summarize the key points to facilitate the decision-making process.\", \"reviewers_generally_agree_on_the_strength_of_the_paper\": [\"The clear and accessible writing style (Reviewers Yddj, SkmK, vbjw)\", \"The extensive experimental evaluation (Reviewers Yddj, SkmK, vbjw)\", \"The significant performance improvements achieved by AIME-NoB compared to existing methods (All reviewers)\", \"During the rebuttal, three new pieces of content have been added:\", \"The formal definition of EKB and DKB, as requested by reviewer vbjw.\", \"Experiments on more challenging tasks, e.g. Humanoid, as requested by reviewer SkmK and reviewer vbjw.\", \"Experiments to compare AIME-NoB with Dreamer, as requested by reviewer Yddj.\", \"These additions were acknowledged by the reviewers as helpful and further strengthened the paper.\"], \"after_the_rebuttal\": [\"While maintaining the score, reviewer SkmK has no concerns and leans toward acceptance.\", \"Reviewer Yddj only has concerns regarding the limited novelty of the method, but also leans toward acceptance.\", \"Reviewer vbjw has two remaining concerns: 1) the limited novelty of the method, which the reviewer acknowledged as a subjective opinion; 2) the direct applicability to real-world robotics problems. Although we offered some examples of use cases, they did not change the reviewer's mind.\", \"Reviewer 4pZy holds an opinion contrary to that of all other reviewers concerning the clarity of the paper, while the paper is acknowledged by all the other reviewers to be well-written and easy to follow. Although we submitted the rebuttal **18 days** ago to clarify the misunderstandings, unfortunately we did not get any response from reviewer 4pZy during the entire discussion phase.\", \"We still believe that, with its clear intuition and strong performance, the paper offers valuable insights to the field.\", \"We hope this summary assists in the decision-making process. 
Thank you once again for your time and efforts in reviewing and discussing our work.\"]}", "{\"title\": \"New Revision\", \"comment\": \"We would like to extend our gratitude for all the constructive reviews, with special thanks to reviewers SkmK and vbjw for their participation in the discussion. We have updated the revised paper to a new version, incorporating the following changes:\\n\\n- Moved the formal definitions of EKB and DKB to the main text, as suggested by reviewer vbjw.\\n- Added more results on the humanoid experiment, including additional results on humanoid-walk and humanoid-run and more seeds.\\n\\nWe believe these revisions enhance the clarity of the paper and better demonstrate the potential of AIME-NoB in complex tasks. We kindly request reviewers, especially Yddj and 4pZy, to engage in the discussion and raise any additional questions if something remains unclear. We also ask you to consider raising your scores if our responses have adequately addressed your concerns.\"}", "{\"comment\": [\"Thank you for your thoughtful feedback. We appreciate your recognition of our improvement and raise the score. We would like to address your remaining questions:\", \"**Re real world use cases**:\", \"We apologize if our example of EEF to joint torque action space conversion appeared ambitious. The core point is the action space may change from the demonstration to the local setup for imitation. Still taking open-x as an example, most datasets are collected under EEF position action space. Thus, it will create a mismatch if we want to imitate a policy in joint velocity space, which is favourable for RL [2]. We would like to further point out that even if the high-level action space is the same as the EEF position, the low-level controller can also be different. Thus, even if the policy perfectly imitates the action sequence in the dataset, we can still result in different outcomes. 
Our approach can theoretically handle both of these cases.\", \"Regarding the second point, in ILfO the goal is to imitate the expert's observation, i.e. the results of the actions. For this purpose, it is not always necessary to accurately infer the true actions. As shown in Figure 5 left, when AIME-NoB reaches the expert performance, the action MSE is still on the scale of 0.1, which is not so accurate. On the other hand, our method is purely data-driven. If we have a new form of robot, our method can be deployed by just collecting some data to pretrain a world model and using the observation-only demonstrations. The other hand-designed methods are mainly developed for robot arms. Thus, when a new form of robot comes into play, the algorithm may need to be redesigned to fit the new robot. From this viewpoint, our method is more scalable.\", \"**Re EKB and DKB**:\", \"We are glad to hear that you find the definition helpful! We would like to first clarify:\", \"Regarding $\\\\hat p$, we use the notation to indicate that it is an empirical distribution [3]. And since it is formed based on the ground truth action, we also refer to it as a ground truth distribution. We are sorry if this causes confusion. Do you have some suggestions about a better notation to use?\", \"$\\\\pi_{\\\\mathrm{demo}}$ is the policy that collects the demonstration dataset. It is defined in Section 2 (Line 131).\", \"Regarding \\\"quantitatively measure to close the loop\\\", we did the ablation in Figure 3. In terms of the theory framework, each line in the figure means:\", \"**Expert:** $\\\\mathcal{R}(\\\\pi_{\\\\mathrm{demo}})$.\", \"**MBBC (oracle):** $\\\\mathcal{R}(\\\\pi_{\\\\omega^*}^{\\\\mathrm{AIME}})$ with Eq. 3 as the $J_{\\\\mathrm{policy}}$.\", \"**AIME:** $\\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME}})$ with Eq. 3 as the $J_{\\\\mathrm{policy}}$ and Eq. 2 as the $J_{\\\\mathrm{model}}$.\", \"**AIME-NoEKB:** $\\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME-NoEKB}})$ with Eq. 
3 as the $J_{\\\\mathrm{policy}}$ and **Eq. 4** as the $J_{\\\\mathrm{model}}$. (only change the model learning to overcome EKB)\", \"**AIME-NoB:** $\\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME-NoB}})$ with **Eq. 7** as the $J_{\\\\mathrm{policy}}$ and Eq. 4 as the $J_{\\\\mathrm{model}}$. (further change the policy learning objective to overcome DKB)\"], \"the_ekb_and_dkb_for_aime_can_be_measured_by\": \"- $\\\\text{EKB} = \\\\mathcal{R}(\\\\pi_{\\\\omega^*}^{\\\\mathrm{AIME}}) - \\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME}})$\\n- $\\\\text{DKB} = \\\\mathcal{R}(\\\\pi_{\\\\mathrm{demo}}) - \\\\mathcal{R}(\\\\pi_{\\\\omega^*}^{\\\\mathrm{AIME}})$\\n\\nThen given $\\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME-NoEKB}}) \\\\approx \\\\mathcal{R}(\\\\pi_{\\\\omega^*}^{\\\\mathrm{AIME}})$, we argue that EKB is overcome by **Eq. 4**. Then, given $\\\\mathcal{R}(\\\\pi_{\\\\psi^*}^{\\\\mathrm{AIME-NoB}}) = \\\\mathcal{R}(\\\\pi_{\\\\mathrm{demo}})$, we argue the DKB is further overcome by **Eq. 7**. We could potentially also have the oracle $\\\\mathcal{R}(\\\\pi_{\\\\omega^*}^{\\\\mathrm{AIME-NoB}})$ with Eq. 7 as the $J_{\\\\mathrm{policy}}$ to compute the EKB and DKB for AIME-NoB. But given AIME-NoB already reaches the expert performance, we can infer $\\\\mathrm{EKB}=\\\\mathrm{DKB}=0$ for AIME-NoB on the walker-run task.\\n\\nWe agree with you that it should be helpful to include the definition in the main text. For now, we keep it in the appendix to ease our discussion, and we will merge that to the main text in the final version.\\n\\n[2] Aljalbout *et al.*, On the role of the action space in robot manipulation learning and sim-to-real transfer, RA-L 2024\\n\\n[3] Empirical distribution function - Wikipedia, https://en.wikipedia.org/wiki/Empirical_distribution_function\"}", "{\"title\": \"Rebuttal (Part 2 / 3)\", \"comment\": \"> EKB and DKB are defined quite informally and then are used extensively, sometimes in a hand-wavy way. 
I\\u2019m not sure if introducing them is helpful for understanding. I think it would be better to either remove them or define them more formally and measure the extent to which they contribute to poor performance.\\n\\nThanks for the suggestions. We add formal definitions of EKB and DKB as:\\n- $\\\\text{EKB} = \\\\mathcal{R}(\\\\pi_{\\\\omega^*}) - \\\\mathcal{R}(\\\\pi_{\\\\psi^*})$\\n- $\\\\text{DKB} = \\\\mathcal{R}(\\\\pi_{\\\\mathrm{demo}}) - \\\\mathcal{R}(\\\\pi_{\\\\omega^*})$\", \"where\": [\"$\\\\mathcal{R}(\\\\pi) = \\\\mathbb{E}_{a \\\\sim \\\\pi} [\\\\sum_t r_t]$ is the expected accumulate reward under policy $\\\\pi$.\", \"$J_{\\\\mathrm{policy}}(q(a_t|o_{1:T}), \\\\pi(a_t|o_{\\\\leq t}), D_{\\\\mathrm{demo}}, D_{\\\\mathrm{body}}, D_{\\\\mathrm{online}})$ is the learning objective for the policy depending on an action-inference model $q(a_t|o_{1:T})$. For behaviour cloning based methods like BCO and AIME, it is essentially equivalent to $- \\\\sum_{o_{1:T} \\\\in D_{demo}} \\\\sum_{t} D_{\\\\text{KL}}(q(a_t|o_{1:T}) | \\\\pi(a_t|o_{\\\\leq t}))$. In this paper, we use Eq. 3 for AIME and Eq. 7 for AIME-NoB.\", \"$\\\\phi^* = \\\\arg \\\\max_\\\\phi J_{\\\\mathrm{model}}(D_{\\\\mathrm{body}}, D_{\\\\mathrm{online}}, \\\\phi)$ is the optimal parameter for maximising the model learning objective $J_{\\\\mathrm{model}}$, e.g. Eq. 2 for AIME and Eq. 
4 for AIME-NoB.\", \"$\\\\hat p_{\\\\pi_{\\\\mathrm{demo}}}(a_t|o_{1:T})$ is the ground truth of empirical distribution of the demonstration data that serves as an oracle.\", \"$\\\\omega^* = \\\\arg \\\\max_\\\\omega J_{\\\\mathrm{policy}}(\\\\hat p_{\\\\pi_{\\\\mathrm{demo}}}(a_t|o_{1:T}), \\\\pi_{\\\\omega}(a_t|o_{\\\\leq t}), D_{\\\\mathrm{demo}}, D_{\\\\mathrm{body}}, D_{\\\\mathrm{online}})$ represents the optimal policy parameters for maximising $J_{\\\\mathrm{policy}}$ with the oracle.\", \"$\\\\psi^* = \\\\arg \\\\max_\\\\psi J_{\\\\mathrm{policy}}(q_{\\\\phi^*}(a_t|o_{1:T}), \\\\pi_{\\\\psi}(a_t|o_{\\\\leq t}), D_{\\\\mathrm{demo}}, D_{\\\\mathrm{body}}, D_{\\\\mathrm{online}})$ represents the optimal policy parameters for maximising $J_{\\\\mathrm{policy}}$ with the learned model.\", \"Based on these definitions, it is clear that:\", \"To reduce the EKB, we need to bring the learned inference model closer to the oracle, i.e. to minimise $\\\\sum_{o_{1:T} \\\\in D_{\\\\text{demo}}} D_{\\\\text{KL}}(\\\\hat p(a_{1:T-1}|o_{1:T}) | q_{\\\\phi^*}(a_{1:T-1}|o_{1:T}))$. At iteration $k$, The online interactions we proposed allow the agent to rollout the current policy $\\\\pi_{\\\\psi^*_{k}}$ and collect a new trajectory. This new trajectory together with previously collected trajectories are used to train the model, i.e. improve the model parameter to $\\\\phi^*_{k+1}$ by maximising $J_{\\\\mathrm{model}}$. This parameter update serves to reduce the gap between the oracle and the learned model. Then the model parameter $\\\\phi^*_{k+1}$ is used to improve the policy parameter to $\\\\psi^*_{k+1}$ by maximising $J_{\\\\mathrm{policy}}$. Thus, through this iterative process, the EKB can be reduced.\", \"To reduce the DKB without increasing the number of demonstrations, we modified the policy learning objective $J_{\\\\mathrm{policy}}$ by adding the value gradient loss based on the surrogate reward. 
The surrogate reward improves the policy $\\\\pi_{\\\\psi^*}$ on states not covered by the demonstrations. Thus, the DKB can be reduced.\"]}", "{\"summary\": \"This paper extends AIME to AIME-NoBarriers, which addresses EKB with online interaction and DKB with online trajectories labeled with surrogate rewards. The authors evaluate the effectiveness of AIME-NoB in DMC and MetaWorld.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"AIME-NoB alleviates EKB through online interaction. To address DKB, the most natural way is to collect more expert demonstrations, which is expensive in terms of robot manipulation. Therefore, AIME-NoB circumvents this burden by providing reward signals using surrogate models and optimizes the policy with reward-labeled online trajectories.\", \"Experiments in DMC and MetaWorld show that AIME-NoB can bring significant improvement to AIME.\"], \"weaknesses\": [\"The logic is confusing. The authors state that in order to alleviate EKB, they additionally use online interactions. It is obvious that online trajectories will bring more benefits since the behavior policy used to collect offline datasets differs from the current policy. So the improvement compared to AIME might be largely caused by the setting difference rather than algorithm improvement.\", \"The setting is very complicated. To my understanding, AIME's setting includes an offline dataset $(s_{\\mathrm{off}},a_{\\mathrm{off}})$ for world model training, an expert dataset $(s_{\\mathrm{expert}})$ for imitation learning. And AIME-NoB additionally assumes access to the environment to collect an online dataset $(s_{\\mathrm{on}},a_{\\mathrm{on}})$. 
The complexity of the setting makes it hard to make fair experimental comparisons as baselines might only assume access to a part of the datasets.\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer 4pZy,\\n\\nSince the discussion phase is coming to an end, it would be great if you could take a moment to respond to the authors' rebuttal. I'm curious to know whether their response addresses your concerns.\"}", "{\"summary\": \"This paper studies the problem of imitation learning (IL) from (visual) observation, i.e., demonstrations do not contain action information using pretrained world models. The paper identifies two key bottlenecks in current IL algorithms, namely (1) OOD observations, and (2) OOD task configurations, which the authors refer to as the Embodiment Knowledge Barrier (EKB) and the Demonstration Knowledge Barrier (DKB), respectively. The key technical contribution is an algorithmic extension of the method Action Inference by Maximizing Evidence (AIME); the core idea is to finetune the learned IL policy via limited online interaction and a RL objective, with surrogate rewards derived from the demonstration dataset. Experiments are conducted on DMControl and Meta-World from visual observations, and the proposed method compares favorably to both AIME without finetuning as well as other IL methods that can be finetuned online (BCO, OT, PatchAIL).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is generally well written and easy to follow. 
Illustrations (Figure 1 in particular) are useful for understanding the proposed method, and Section 2 provides sufficient background for an unfamiliar reader to appreciate the contributions.\", \"Experiments are reasonably extensive, covering a variety of tasks from DMControl and Meta-World, as well as several (what appears to be) baselines appropriate for the problem setting. Empirical results are strong compared to baselines.\", \"It is a purely empirical paper, but there are sufficient ablations to understand how the algorithm behaves in different settings (e.g. number of demos), and which components are the main drivers of performance (e.g. choice of surrogate reward).\"], \"weaknesses\": [\"I appreciate that the authors conduct experiments on both tasks from DMControl and Meta-World, and consider tasks with varying difficulty in the case of Meta-World. However, I would have liked to see a few examples of tasks that are more challenging, i.e., where the proposed method (along with baselines) struggle a bit more. For DMControl this could be e.g. any of the Humanoid or Dog tasks, and for Meta-World this could be e.g. Stick Pull / Push or Pick Place (Shelf).\", \"I find that some of the claims / conclusions are a bit exaggerated relative to what the experimental results show. For example, the authors claim that \\\" the model pretrained on MW-mt50 offers much better results\\\" (L424) while the results in Figure 4(c) show a fairly small difference between the two curves (absolute ~15% increase in success rate on avg it seems). I would prefer claims to accurately reflect the evidence.\"], \"questions\": \"I would like the authors to address my comments listed in \\\"weaknesses\\\" above during the rebuttal. I have one additional question:\\n\\n- Figure 10 indicates that the proposed method succeeds in all of the considered tasks except Cartpole Swingup. 
I would have expected this task to be the easiest of them all; can the authors please comment on why the method fails on this particular task?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear reviewer Yddj,\\n\\nSince the discussion phase is coming to an end, it would be great if you could take a moment to respond to the authors' rebuttal. I'm curious to know whether their response addresses your concerns.\"}", "{\"title\": \"Reply\", \"comment\": \"**Re contribution**: I still think that the significance of the methodological contribution is minor. Though, this is perhaps merely a difference in opinion, rather than a point worth arguing over.\\n\\n**Re real-world use-cases**:\\n- I don't think that there is \\\"immediate relevance\\\" to open-x and torque-controlled robots. There is a substantial gap between narrow simulated environments and the diverse real data in open-x. Furthermore, even doing any online RL or IL with torque-controlled robots is unfortunately rather challenging. Though, it is a nice hypothetical use-case.\\n- For manual, kinesthetic demonstrations, prior works have been able to impute reasonable action estimates using proprioceptive information/encoder readings and using a well-tuned joint or end-effector controller (or even using SLAM, like with UMI). To my knowledge, it remains an open question on whether you can outperform these action estimates with an ILFO method or if the gap between achieved and commanded actions is small enough. Showing that this method could significantly outperform such action estimates would be interesting/convincing, and perhaps could be done in simulation.\\n\\n**Re EKB, DKB** The formal definition of EKB and DKB is useful! The clarity could be improved, e.g. 
using $\\hat{p}$ to refer to a ground truth distribution is confusing/atypical, and $\\pi_\\text{demo}$ is never defined. To fully close the loop on the intuition provided in the paper, it would be nice to quantitatively measure these in the ablations to empirically validate the intuition. I also think that it would be valuable to integrate the definitions in the main text of the paper.\\n\\n**Experiments** The new experiments, especially including humanoid stand, are quite nice.\\n\\nSome of my concerns have been addressed (namely on EKB/DKB definitions and the experiments), so I will raise my score to a 5.\"}", "{\"title\": \"Rebuttal (Part 3 / 3)\", \"comment\": \"> I appreciate that the experiments are on visual observations, though they would be stronger if they included more complex tasks such as dextrous tasks, longer horizon tasks, or tasks with greater diversity/generalization\\n\\nThanks for the suggestions. We have conducted new experiments on more complex tasks as also suggested by reviewer SkmK. As shown in previous benchmark results, the baselines BCO($\\\\alpha$), OT and PatchAIL do not perform well, so we do not expect them to be good on these even harder tasks. Thus, we currently mainly compare AIME-NoB with AIME. If time permits during the rebuttal phase, we will add these results later. All the results are available in Appendix L of the revised paper.\\n\\nFor MetaWorld, we include four additional tasks, namely pick-place, shelf-place, stick-pull and stick-push. The results are shown in Figure 15. From the results, AIME-NoB reliably outperforms AIME. \\nEven in the very hard tasks stick-pull and stick-push, AIME-NoB manages around 60% success rates. However, AIME-NoB doesn't perform so well on pick-place and shelf-place. We conjecture it is due to the visual difficulties of the tasks. 
It is known that world models based on reconstruction loss struggle at modeling small objects, such as the small cube we need to pick up in these two tasks. Improving the world models' ability to model small objects or increasing the resolution of the observations will likely improve the performance. \\n\\nFor DMC, we conduct additional experiments on humanoid-stand, and show the results in Figure 16. Results show AIME-NoB's capability to make progress on and potentially solve complex tasks like Humanoid. To the best of our knowledge, this is the first time that an ILfO algorithm shows progress on a vision-based humanoid task.\"}" ] }
EbG3PV7RaN
RePaFormer: Ferocious and Scalable Acceleration of MetaFormers via Structural Reparameterization
[ "Xuwei Xu", "Yang Li", "Yudong Chen", "Jiajun Liu", "Sen Wang" ]
We reveal that feed-forward network (FFN) layers significantly contribute to the latencies of Vision Transformers (ViTs). This effect scales up quickly as the model size escalates, and hence presents a major opportunity in efficiency optimization for ViTs via structural reparameterization on FFN layers. However, directly reparameterizing the linear projection weights is difficult due to the non-linear activation in between. In this work, we propose an innovative channel idle mechanism that establishes a linear pathway through the activation function, facilitating structural reparameterization on FFN layers during inference. Consequently, we present a family of efficient ViTs embedded with the introduced mechanism called **RePa**rameterizable Vision Trans**Formers** (RePaFormers). This technique brings remarkable latency reductions with small sacrifices (sometimes gains) in accuracy across various MetaFormer-structured architectures investigated in the experiments. The benefits of this method scale consistently with model sizes, demonstrating increasing efficiency improvements and narrowing performance gaps as model sizes grow. Specifically, the RePaFormer variants for DeiT-Base and Swin-Base achieve 67.5% and 49.7% throughput accelerations with minor changes in top-1 accuracy (-0.4% and -0.9%), respectively. Further improvements in speed and accuracy are expected on even larger ViT models. In particular, the RePaFormer variants for ViT-Large and ViT-Huge enjoy 66.8% and 68.7% inference speed-ups with +1.7% and +1.1% higher top-1 accuracies, respectively. RePaFormer is the first to employ structural reparameterization on FFN layers to expedite ViTs to our best knowledge, and we believe that it represents an auspicious direction for efficient ViTs. Codes are provided in the supplementary material.
[ "Efficient ViT", "Structural Reparameterization", "FFN Acceleration" ]
Reject
https://openreview.net/pdf?id=EbG3PV7RaN
https://openreview.net/forum?id=EbG3PV7RaN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zRvPokuVAU", "yM3wAOI8Yu", "yFTLMkDWwe", "xscNd4FdVl", "wohB5hedJr", "wLzftmUY6V", "rAT8Jzo05q", "pwZGZAYMl9", "nczjYXy3j4", "mJVoUbCUkY", "lKyMxd4rJ4", "ht9Kx1Kjks", "eldXMPJ4tt", "ee3kzpUyRm", "eM6QSsAYJC", "eKk3GeDzqg", "cb2fLqo5A3", "boA2yUrCz6", "biHzF9flSo", "Zhia6QOmQw", "XjEQz1HBgA", "VgbS4kgyqe", "VKfARwfw4P", "NEm9Jbeut9", "LzzF8Yd7K7", "KwBZwUh98z", "KXkCPvQC31", "K7xff6a0H0", "JtitCYcxUw", "JNhYT17zs9", "IouRifb1NA", "IlMWKDBs6L", "HpOjJRdI06", "GB26JZWdUe", "FRHEGf7fG4", "DfHJ7v9NXQ", "BksziY9WaA", "7m6rYGGqLf", "5X7919BPS5", "4huliWlyjF", "3lFTxviznD", "2XPdlzmF1S", "1T8q2OaTyj" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730779362272, 1732520704890, 1732335686448, 1732062939611, 1732801656772, 1732613896711, 1731731060013, 1733060408167, 1731822116453, 1732777838823, 1731826589508, 1732063067792, 1732339256248, 1730546164148, 1731921262569, 1733619909860, 1732480920452, 1731733062290, 1732610929493, 1732777309976, 1732509987899, 1732335951592, 1732062853220, 1730814802367, 1732336101812, 1733060627886, 1732603932902, 1732605232608, 1731656076557, 1731733578473, 1731823467098, 1733060009328, 1732637651487, 
1732802056922, 1732337459848, 1730663634300, 1732336055121, 1732778557465, 1732062987961, 1737523459174, 1731919601206, 1732778649692, 1732545978212 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_wqPR" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_ufAp" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_NBjZ" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_ufAp" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Area_Chair_dTod" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_wqPR" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_ufAp" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_NBjZ" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1588/Reviewer_nzL3" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Reviewer_nzL3" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ], [ "ICLR.cc/2025/Conference/Submission1588/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies the feed-forward network (FFN) layers in the MetaFormer architecture and finds that they play a significant role in introducing latencies. Based on this observation, the authors propose ReParameterizable Vision Transformers (RePaFormers) with the structural reparameterization technique and reduce the latency remarkably with a minor sacrifice in accuracy. Extensive experiments on various tasks and datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of applying structural reparameterization in FFN layers is great and it brings actual speedup in GPU latency.\\n2. The experimental results are extensive as the method is validated not only on classification tasks, but on downstream tasks and the self-supervised learning setting as well, which highlights the generalization ability of the proposed method.\\n3. Overall, the paper is clearly written and well-organized.\", \"weaknesses\": \"1. Advantage over other model compression techniques. The goal of the proposed method is to increase the efficiency of current architectures, while it can also be realized by other model compression techniques, including pruning, distillation or quantization. 
The reviewer understands that comparing with those methods is out of the scope of this work, but it would be great if the authors could provide some justification of the advantage of the proposed method. For example, whether the proposed method is more generalizable across model architectures, easier to implement, or has less impact on accuracy compared to other techniques.\\n2. Performance gap with self-supervised baselines. It is noticeable that the performance gap with self-supervised baselines is larger than the gap in supervised learning, which may hinder its application to foundation models. Meanwhile, it is also unknown whether the proposed method brings a negative impact on the generalization ability of self-supervised learning methods, and experiments on downstream tasks like fine-grained classification may validate this point.\\n3. Performance gap at downstream tasks. It is also noteworthy that the performance gap at dense prediction tasks is non-negligible compared to the gap in classification tasks. It would be better if the authors could provide some explanations or analysis.\", \"questions\": \"Apart from the questions in the weaknesses, the reviewer has two additional questions:\\n\\n1. Training costs. What would be the training time of the proposed method compared to the vanilla version?\\n2. The authors have mentioned in the abstract that 'improvements in speed and accuracy are expected on even larger models', which may not be convincing enough, as the improvements in accuracy should be supported by empirical results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Could you clarify what you mean by:\\n\\n\\\"However, with the BatchNorm not being reparameterized, the shortcut layer cannot be reparameterized into the linear projection either. 
So in the supplementary experiments, we will keep the shortcut in each FFN layer.\\\"\\n\\nThank you in advance for your explanation.\"}", "{\"title\": \"Kind Request for Discussion and Reconsideration of the Score\", \"comment\": \"We sincerely thank the reviewer for the valuable suggestions and insightful comments. In our response, we have carefully addressed all the raised concerns regarding 1) the distinction between our method and RepVGG, 2) performance on smaller models, 3) latency improvement on dense prediction tasks and other minors.\\n\\nIf the reviewer has any further questions, __we are glad to join in the discussion__. Otherwise, if our responses have satisfactorily resolved the concerns, __could the reviewer reconsider the score based on our clarifications and responses?__\"}", "{\"title\": \"Kind Request for Rebuttal Discussion or Reconsideration of the Score\", \"comment\": \"We thank Reviewer wqPR for the valuable feedback and insightful suggestions, which have helped us refine and clarify our work. We have carefully addressed all the raised concerns in our response.\\n\\nWe would greatly appreciate it if Reviewer wqPR could provide further feedback. Your input is invaluable to ensuring the quality and clarity of our work. Or, __if our responses have satisfactorily resolved the concerns, we respectfully request reconsideration of the score based on the clarifications and improvements provided__.\"}", "{\"title\": \"Response to Reviewer ufAp's Further Concerns (Part 1)\", \"comment\": \"We thank Reviewer ufAp for the reply and are pleased to know that most of your concerns have been addressed. We would like to answer your further concerns as below:\\n\\nFirstly, __we respectfully disagree with the assertion that _\\\"the improvements over the new baselines appear to be modest\\\"___. 
When using Swin-Small and Swin-Base as backbones, training with our method improves top-1 accuracy by 2.5% and 2.3% compared to their _new baseline_ variants, respectively. In particular, our method achieves 1.7% higher top-1 accuracy on ViT-Large compared to the new baseline with only 29.8% more training time (223.3 vs 172.0 GPU$\\\\cdot$hours). __These improvements in the top-1 accuracy are substantial, especially in the context of large ViT models, which clearly demonstrate the effectiveness of our approach__.\\n\\nSecondly, we believe that the additional training cost is reasonable given the performance gains. The rough total training times (GPU$\\\\cdot$hours) on our HPC for these models are provided in the table below.\\n\\n|Backbone|Vanilla|RePaFormer|New Baseline|\\n|:-|:-|:-|:-|\\n|DeiT-Tiny|60.0|72.3|51.3|\\n|DeiT-Small|70.7|90.0|63.3|\\n|DeiT-Base|93.3|123.3|86.7|\\n|ViT-Large|196.3|223.3|172.0|\\n|Swin-Tiny|117.3|139.3|105.3|\\n|Swin-Small|180.0|215.7|158.7|\\n|Swin-Base|210.7|272.0|184.0|\\n|LV-ViT-S|159.3|189.3|144.0|\\n|LV-ViT-M|209.7|250.3|-|\\n\\nOn average, our method incurs approximately 30\\\\~50% more training time. For comparison, when pre-training the vanilla ViT on the JFT-300M dataset, ViT-Huge requires 2.5k TPUv3-core-days while ViT-Large takes only 0.68k TPUv3-core-days\\u2014267.6% more training time for just a 0.8% top-1 accuracy increase (statistics taken from the original ViT paper [1]). In contrast, __the accuracy improvements achieved by our method with significantly less overhead highlight the practical value and efficiency of our approach__.\\n\\nThirdly, the new baseline (i.e., using $C^2$ projection weights in the FFN layer during training) introduces instability to the training process within the LV-ViT training framework that incorporates token labelling. 
When taking LV-ViT-M as the backbone, the loss explodes after only a few epochs (fewer than 5), even when the learning rate is still small during the warm-up phase or gradient clipping is applied. While exploring methods to stabilize the training of this new baseline with the token labelling schema is an intriguing research topic, it lies beyond the scope of this work. __As a result, we cannot agree that the absence of large-scale hyperparameter tuning for this specific new baseline on LV-ViT-M should diminish the contribution of this work__.\n\nIn conclusion, the trade-off between accuracy gain and training costs is justified and worthwhile.\"}", "{\"title\": \"Further response\", \"comment\": \"We thank the authors for preparing the rebuttal! Although some of my questions have been addressed, I still have the same concern regarding the insufficient experimental results.\n\nSpecifically, although the authors want to emphasize the narrower performance gap for larger models, another interpretation of the results is that the methodology may not be applicable to all model scales or model types. For example, according to Table 2, even for the large PoolFormer-s36, the resulting RePa-PoolFormer-s36 from the proposed method performs much worse than the vanilla PoolFormer-s24 in terms of the accuracy-efficiency trade-off. This suggests that the proposed method should not be applied to this model type or scale. Similar cases can also be found in Table 2 of the paper.\n\nAdditionally, the reason I am curious about comparisons with other compression techniques, although orthogonal, is that it is unclear when the proposed method should be used and under what conditions it will be beneficial. 
Therefore, it is highly desirable for the authors to clearly state the application scenarios where the proposed method is the preferred choice.\"}", "{\"title\": \"Response to Reviewer NBjZ (Part 1)\", \"comment\": \"We sincerely thank Reviewer NBjZ for the valuable comments, especially the recognition that _the proposed method is motivated by a comprehensive latency analysis_.\\n\\n&nbsp;\\n\\n---\\n\\n__1. Response to W2 and Q1__ ___(what makes our method distinct from RepVGG)___\\n\\nWe would like to first respond to the Reviewer's major concern about the differences between our structural reparameterization method and RepVGG-style reparameterization. The differences are threefold:\\n\\n1. __Different reparameterization solutions__: The key difference is that RepVGG reparameterizes __horizontally__ across parallel convolutional kernels, while RePaFormer reparameterizes __vertically__ on consecutive linear projection weights.\\n \\n For instance, RepVGG reparameterizes two _parallel_ convolutional branches with kernels $W_1^{\\\\text{Conv}}$ and $W_2^{\\\\text{Conv}}$ by summing them:\\n $$\\n W_{\\\\text{Rep}}^{\\\\text{Conv}} = W_1^{\\\\text{Conv}} + W_2^{\\\\text{Conv}}.\\n $$\\n\\n On the contrary, as demonstrated in Equation 6, RePaFormer reparameterizes two _consecutive_ projection weights $W_1^{\\\\text{FFN}}$ and $W_2^{\\\\text{FFN}}$ by multiplying them:\\n $$\\n W_{\\\\text{Rep}}^{\\\\text{FFN}} = W_1^{\\\\text{FFN}} \\\\cdot W_2^{\\\\text{FFN}}.\\n $$\\n\\n (In the above example, we omit the BatchNorm and suppose $W_1^{\\\\text{Conv}}$ and $W_2^{\\\\text{Conv}}$ have been padded to the same shape.)\\n\\n2. __Different target components__: RepVGG and RepVGG-style methods apply reparameterization to multi-branch convolutional layers in CNNs, while our RePaFormer targets FFN layers in ViTs. Their application targets are distinct.\\n\\n3. 
__Different scopes__: Although some previous works [1,2] have attempted to adapt RepVGG-style reparameterization on ViTs by incorporating multi-branch convolutions into the ViT backbone, they only reparameterize the convolutional parts. The main scope of these works is to construct novel mobile-friendly architectures. In contrast, our method is the first to apply structural reparameterization to FFN layers and accelerate existing ViTs/MetaFormers of all sizes.\\n\\nMoreover, we kindly argue that our __channel idle mechanism cannot be regarded as a special case of a dual-branch structure in RepVGG__. In RepVGG, all branches must be linear so that they can be reparameterized, whereas in our approach, only one branch is linear while the other one is nonlinear.\\n\\nNonetheless, __we would not claim our RePaFormer to be more advantageous than RepVGG, as they are parallel approaches solving different problems that can be used simultaneously__. We still appreciate the Reviewer for this comment and will include a detailed comparison with vanilla RepVGG-style reparameterization in our revised version.\\n \\n&nbsp;\\n\\n---\\n\\n__2. Response to W1__ ___(scaling poorly to smaller models)___:\\n\\nWe thank the Reviewer for this comment, which gives us a chance to emphasize our main claim again. As stated in Lines 28, 106-107, 421-423, 464-466 and Tables 2, the most important characteristic of our method is that __it consistently yields a more substantial speed gain and a much narrower performance gap when the backbone model complexity increases__. 
We anticipate this method to be increasingly effective on larger Transformer/MetaFormer-based models, aligning well with the growing significance of large foundation models today.\n\nMoreover, in Appendix A.2, we have also observed the mentioned problem and explained that _after applying the channel idle mechanism with a high idle ratio (e.g., 75%), tiny models would lack sufficient non-linear transformations_, which is the major reason for the performance drop on smaller ViTs. It is commonly acknowledged that smaller ViTs are less robust and suffer more severe performance declines when compressed, which holds for both token pruning [3,4] and parameter pruning methods [5,6].\n\nWe would like to empirically validate it by presenting the performance of small-size ViT models with various idle ratios:\n\n|Model|Idle ratio ($\\theta$)|#MParam.|Complexity (GMACs)|Speed (img/s)|Top-1 acc. (%)|\n|:-|-|-|-|-|-|\n|RePa-DeiT-Tiny||||||\n||75%|3.5|0.8|4323.8|64.2|\n||50%|4.4|1.0|3904.2|69.2|\n||25%|5.3|1.2|3555.1|71.9|\n||0% (vanilla)|5.7|1.3|3372.2|72.1|\n|RePa-Swin-Tiny||||||\n||75%|17.5|2.6|1016.3|78.5|\n||50%|21.8|3.3|927.8|80.5|\n||25%|26.1|4.0|864.9|81.4|\n||0% (vanilla)|28.3|4.5|789.8|81.2|\n|RePa-PoolFormer-s12||||||\n||75%|6.0|0.8|4000.2|70.5|\n||50%|8.4|1.2|3345.4|74.3|\n||25%|10.7|1.6|2910.1|76.8|\n||0% (vanilla)|12.0|1.9|2450.0|77.2|\n\nAs the table shows, __our RePaFormers demonstrate narrow performance gaps on smaller models when the idle ratio is less rigorous (i.e., $\\theta$ = 25%)__. While scaling to small or tiny-sized models is not the primary focus of this work, our method still shows effectiveness in these cases. In addition, this hyperparameter sensitivity study is insightful and will be added to the revised version.\"}
In our previous response, we made every effort to address your further concerns and to clarify the key contributions and significance of our work, including:\\n\\n* Explaining that our method is simpler to implement than network pruning.\\n\\n* Providing additional experimental results on large ViT models, where our method achieves both improved efficiency and increased accuracy.\\n\\n* Demonstrating the transformative contribution of our work for large-scale foundation models in vision tasks.\\n\\n__We greatly appreciate your time in carefully reviewing our further response. If it satisfactorily resolves all your concerns, we would be deeply grateful for your support of our work and reconsideration of the score.__\\n\\nBest regards, \\nAuthors\"}", "{\"title\": \"Response to Reviewer wqPR (Part 1)\", \"comment\": \"We sincerely appreciate Reviewer wqPR for all the contributive comments and support to our innovative work, especially highlighting that our ___idea is great and the experiments are extensive___.\\n\\n&nbsp;\\n\\n---\\n\\n__1. Response to W1__ ___(justification of the advantage of the proposed method)___:\\n\\nWe thank the reviewer for acknowledging that _direct comparisons with other network compression methods are_ ___out of the scope of this work___. And we admit that highlighting the significance of our method and demonstrating the advantages over other compression methods is necessary in our work.\\n\\nFirst, we would like to emphasize the significance of our method as follows:\\n\\n1. __Pioneering the acceleration of FFN layers__: The FFN layer in MetaFormer-structured models is essential and indispensable. Specifically, [1] empirically demonstrates that the token mixer (e.g., self-attention or convolution) can be replaced by simpler operations (e.g., pooling), while the FFN layer should remain indispensable. 
Besides, [2] investigates the critical role of the FFN layer in the general Transformer architecture.\n\n However, despite its importance, the FFN layer constitutes a significant portion of the inference time, as shown in our study, yet there has been little research on accelerating this component. To the best of our knowledge, this work is the first to specifically target the acceleration of the FFN layer.\n\n2. __Generalizability to various MetaFormer-structured models__: The MetaFormer structure has been widely validated as the de facto architecture for computer vision tasks by various works [3,4]. Our method can be seamlessly integrated into these models without requiring specialized design modifications. This generalizability has been thoroughly demonstrated through the results presented in Tables 1 and 2. \n\n3. __Scalability to larger models__: As shown in Table 2, our method achieves greater speed gains, smaller model sizes, and narrower performance gaps as the model size increases within the same architecture. When scaling from DeiT-Tiny to DeiT-Base, the accuracy drop decreases significantly from 7.9% to 0.4%, while the inference speed gain increases from 32.6% to 67.5%.\n\nNext, when compared with other compression methods, our method has the following advantages:\n\n1. __Hardware friendly__: Our reparameterized model is dense and structurally regular, making it efficient to run on general-purpose hardware without requiring specialized hardware support. On the contrary, quantization needs support for low-precision computations, and pruning usually requires support for sparse matrix operations.\n\n2. __Easy to implement__: Our RePaFormer is compatible with existing training and deployment pipelines and can be seamlessly embedded into existing MetaFormer-structured models. 
There is no need for specific adjustments in the training framework like quantization, distillation or pruning.\\n\\nIn addition, we would also like to compare our method with a state-of-the-art pruning method, DC-ViT [5] (CVPR24), since pruning methods are closer to our approach. Given that DC-ViT has not released the code, we only compare the latency drop reported in the paper. As the table below shows, our method delivers comparable performance to the state-of-the-art pruning method and achieves a better trade-off on larger backbones.\\n\\n|Model|Latency drop|Top-1 acc. (%)|\\n|:-|-|-|\\n|DC-ViT-T|-16.5%|__64.4__|\\n|RePa-ViT-T|__-24.6%__|64.3|\\n|DC-ViT-S|-16.7%|__78.6__|\\n|RePa-ViT-S|__-35.3%__|77.1|\\n|DC-DeiT-B|-16.7%|81.3|\\n|RePa-DeiT-B|__-40.3%__|__81.4__|\\n\\nIn conclusion, the suggestion to justify our advantages over existing methods is very constructive and insightful. We will take this suggestion and include the discussion in our revised version. Nonetheless, __it is worth noting that our method is parallel to these model compression methods and can be combined with them to achieve further acceleration__. We hope our approach provides a promising direction for accelerating large ViT models.\"}", "{\"title\": \"Responses to Reviewer NBjZ's Further Concerns\", \"comment\": \"We sincerely appreciate Reviewer NBjZ's reply and thoughtful suggestions. The recommendation of clearly stating the application scenario is highly valuable and will greatly enhance the clarity and impact of our work.\\n\\nIn our original manuscript, we have done a series of experiments to explore the adaptability of our method across various architectures, such as PoolFormers. However, through discussions with you and other Reviewers, and especially inspired by Reviewer wqPR, we have gained a chance to present the key advantages and application scenarios of our approach: __RePaFormer can significantly speed up large ViT models while even improving accuracy__. 
This insight demonstrates the __practical value of RePaFormer in accelerating large-scale models without compromising performance, making it an effective solution for large real-world applications requiring both speed and precision__.\\n\\nTo empirically validate the practicality above, we conducted additional experiments on large vanilla ViTs, following Reviewer wqPR's kind suggestion. Both the vanilla and RePaFormer variants of ViT-Large and ViT-Huge are trained from scratch on the ImageNet-1k dataset using the same training recipes with the idle ratio set to 75% by default. The new results, along with those for MLPMixer-l16 reported in the original manuscript, are shown in the table below:\\n\\n|Model|#MParam.|Complexity (GMACs)|Throughput (img/s)|Top-1 accuracy|\\n|:-|:-|:-|:-|:-|\\n|ViT-Large|304.3|59.7|124.2|80.3%|\\n|RePaViT-Large|__178.4__ (-41.4%)|__34.9__ (-41.5%)|__207.2__ (+66.8%)|__82.0%__|\\n|ViT-Huge|632.2|124.3|61.5|80.3%|\\n|RePaViT-Huge|__369.6__ (-41.5%)|__72.6__ (-41.6%)|__103.8__ (+68.7%)|__81.4%__|\\n|MLPMixer-l16|208.2|44.6|460.0|72.3%|\\n|RePaMLPMixer-l16|__82.2__ (-60.5%)|__20.0__ (-55.2%)|__302.7__ (+89.2%)|__72.6%__|\\n\\nWe are thrilled to emphasize that our method not only drastically reduces model size and latency but also achieves HIGHER top-1 accuracy on large models with more than 200M parameters and computational complexities exceeding 40 GMACs. For instance, RePaViT-Large achieves a 1.7% higher top-1 accuracy (82.0% vs 80.3%) while delivering a 66.8% speed gain (207.2 images/second vs 124.2 images/second) compared to the vanilla ViT-Large. 
__This demonstrates a transformative contribution, as many practical large-scale foundation models for computer vision tasks rely on vanilla ViT as their backbone, such as CLIP [1], SAM [2] and ViT-22B [3].__\\n\\nIn addition, as asked by Reviewer NBjZ, we further summarize the guidelines for adopting our method as follows:\\n\\n * __More applicable to models with complex token mixers__: When the idle ratio remains constant (such as 75% in our experiments), RePaFormer performs better with more complex token mixers, as witnessed in Table 2.\\n \\n * __More applicable to larger models__: When the backbone architecture and idle ratio are fixed, our method generally achieves narrower accuracy drops, and even accuracy gains, on larger models.\\n \\n * __Smaller idle ratios should be leveraged on smaller models__: In our Response (Part 1), we have provided results for smaller models under different idle ratios to illustrate this point. Smaller models are less robust and need more nonlinearities to improve the feature extraction capability.\\n\\nTo the best of our knowledge, RePaFormer is the __first novel method _(orthogonal to network pruning, quantization and distillation)_ that achieves significant acceleration (\\\\~68%) while having positive gains in accuracy (1\\\\~2%) instead of accuracy drops, on large and huge ViTs__. Considering the unprecedented results RePaFormer is getting, we want to point out that this is a disruptive and timely innovation for the community and a significant addition to the large foundation models acceleration toolkit. Since RePaFormer can be both directly applied to larger ViT architectures and combined with other acceleration techniques such as quantization, we believe RePaFormer will catalyze further research and breakthroughs on ViT's speed and accuracy. 
__We strongly believe that the weight and impact of this work make it best-suited for the prestigious ICLR, and the community will benefit greatly by seeing it soon from this venue.__\\n\\nWe hope our response can address all your concerns and demonstrate the significance of our contributions. We kindly request your strong support by considering a score increase.\\n\\n&nbsp;\\n\\n[1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" ICML, 2021.\\n\\n[2] Kirillov, Alexander, et al. \\\"Segment anything.\\\" ICCV, 2023.\\n\\n[3] Dehghani, Mostafa, et al. \\\"Scaling vision transformers to 22 billion parameters.\\\" ICML, 2023.\"}", "{\"title\": \"Response to Reviewer wqPR (Part 3)\", \"comment\": \"__5. Response to Q2__ ___(improvements in accuracy should be supported by empirical results)___:\\n\\nWe thank the reviewer for this question. First, we would like to clarify that a more precise claim should be: \\\"__Improvements with greater speed gains and narrower accuracy gaps are expected on larger models__\\\". This claim aligns with the statements in Lines 106-107, 421-423, 464-466 and Tables 2, 5, 6.\\n\\nIn addition, we would like to provide results of RePaFormer when using ViT-Large and ViT-Huge as the backbone. The vanilla models and their RePaFormer versions are __trained from scratch solely on the ImageNet-1K dataset with the same training schema__ outlined in the manuscript for fairness. We set the drop path rate at 0.3 for both models. The experiments are ongoing and the results will be updated later (_update: ViT-Large and ViT-Huge results have been updated_).\\n\\n|Model |#MParam. 
| Complexity (GMACs) | Throughput (img/s) | Top-1 accuracy|\\n|:-|:-|:-|:-|:-|\\n|ViT-Large|304.3|59.7|124.2|80.3%|\\n|RePaViT-Large|178.4 (-41.4%)|34.9 (-41.5%)|207.2 (+66.8%)|82.0%|\\n|ViT-Huge|632.2|124.3|61.5|80.3%|\\n|RePaViT-Huge|369.6 (-41.5%)|72.6 (-41.6%)|103.8 (+68.7%)|81.4%|\\n\\n&nbsp;\\n\\n---\\n\\nIn the end, we sincerely appreciate Reviewer wqPR for all the insightful suggestions. We will emphasize the advantages of our approach in the revised version and include the above experiments and analysis to provide further clarity. And we're willing to answer any further questions. __Given this is a new and novel direction in the efficient ViT domain, we hope to get Reviewer wqPR's strong support by increasing the score.__ \\n\\n&nbsp;\\n\\n[1] Yu, Weihao, et al. \\\"Metaformer is actually what you need for vision.\\\" CVPR, 2022.\\n\\n[2] Geva, Mor, et al. \\\"Transformer feed-forward layers are key-value memories.\\\" EMNLP, 2021.\\n\\n[3] Zhang, Jiangning, et al. \\\"Rethinking mobile block for efficient attention-based models.\\\" ICCV, 2023.\\n\\n[4] Wang, Ao, et al. \\\"Repvit: Revisiting mobile cnn from vit perspective.\\\" CVPR, 2024.\\n\\n[5] Zhang, Hanxiao, Yifan Zhou, and Guo-Hua Wang. \\\"Dense Vision Transformer Compression with Few Samples.\\\" CVPR, 2024.\\n\\n[6] Guo, Jialong, et al. \\\"SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization.\\\" ICML, 2024.\"}", "{\"title\": \"Kind Request for Rebuttal Discussion or Reconsideration of the Score\", \"comment\": \"We thank Reviewer nzL3 for the valuable feedback and insightful suggestions, which have helped us refine and clarify our work. We have carefully addressed all the raised concerns in our response.\\n\\nWe would greatly appreciate it if Reviewer nzL3 could provide further feedback. Your input is invaluable to ensuring the quality and clarity of our work. 
Or, __if our responses have satisfactorily resolved the concerns, we respectfully request reconsideration of the score based on the clarifications and improvements provided__.\"}", "{\"comment\": \"We sincerely thank you for your response. We would like to conduct the experiment as suggested and will update the results soon. However, with the BatchNorm not being reparameterized, the shortcut layer cannot be reparameterized into the linear projection either. So in the supplementary experiments, we will keep the shortcut in each FFN layer.\"}", "{\"summary\": \"The paper proposes RePaFormer, a novel approach that leverages a channel idle mechanism to enable structural reparameterization of Feed-Forward Network (FFN) layers during inference. Experiments on Vision Transformer families show its improved inference speed compared to baselines with minor performance loss.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and well-motivated.\", \"weaknesses\": \"The major concern of the paper is that the current experimental setup raises concerns about the effectiveness of the proposed method.\n1. **The effects of BatchNorm.** Specifically, if I understand correctly, the vanilla backbone uses LayerNorm while the RePaFormer family uses BatchNorm. It is unclear whether BatchNorm alone could improve the test performance, i.e., accuracy, of the vanilla backbone. \n2. **Similarly, the effectiveness of the channel idle mechanism is inadequately tested.** Specifically, consider the default case where $\\mu=1$, and 75% of the features are idle. This implies for the RePa Linear 3 (Figure 1), the features go through a linear transformation $W_2W_1$ where $W_1 \\in \\mathbb{R}^{3C \\times C}$ and $W_2 \\in \\mathbb{R}^{C \\times 3C}$. 
However, such a transformation can be represented by a $C \times C$ matrix, suggesting that the models use $6C^2$ parameters to learn a linear function that can be represented by just $C^2$ parameters. That means RePaFormer will be useful only if it is much better in terms of accuracy than the baseline where the channel idle part is processed by a single linear layer with weight $W \\in \\mathbb{R}^{C \\times C}$. More specifically, this baseline should be in the form of Figure (b) [without BatchNorm inference reparameterization] and test its throughput and performance.\", \"questions\": \"See my weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer nzL3 (Part 2)\", \"comment\": \"__3. Response to W3__ ___(lacking comparisons with previous structured reparameterization methods)___:\n\nThe structural reparameterization in our RePaFormers is different from previous reparameterization methods (e.g., RepVGG [10] or FastViT [9]). We would like to compare RePaFormer and FastViT as follows:\n\n* __Different target components__: FastViT mainly incorporates multi-branch convolutions into the hierarchical ViT network and only applies reparameterization to multi-branch convolutional layers. In other words, FastViT focuses on __accelerating the token mixer layers__ during inference. In contrast, our RePaFormer targets __accelerating the FFN layers__ in ViTs. 
Our method can be adapted to all models with the FFN layer.\\n\\n* __Different scopes__: The main scope of FastViT is to construct novel efficient architectures, while our method aims to apply structural reparameterization to FFN layers and accelerate existing ViTs/MetaFormers of all sizes.\\n\\n* __Different reparameterization solutions__: Another key difference is that FastViT reparameterizes __horizontally__ across parallel convolutional kernels, while RePaFormer reparameterizes __vertically__ on consecutive linear projection weights.\\n\\n For instance, FastViT reparameterizes two _parallel_ convolutional branches with kernels $W_1^{\\\\text{Conv}}$ and $W_2^{\\\\text{Conv}}$ by summing them:\\n $$\\n W_{\\\\text{Rep}}^{\\\\text{Conv}} = W_1^{\\\\text{Conv}} + W_2^{\\\\text{Conv}}.\\n $$\\n\\n On the contrary, as demonstrated in Equation 6, RePaFormer reparameterizes two _consecutive_ projection weights $W_1^{\\\\text{FFN}}$ and $W_2^{\\\\text{FFN}}$ by multiplying them:\\n $$\\n W_{\\\\text{Rep}}^{\\\\text{FFN}} = W_1^{\\\\text{FFN}} \\\\cdot W_2^{\\\\text{FFN}}.\\n $$\\n\\n (In the above example, we omit the BatchNorm and suppose $W_1^{\\\\text{Conv}}$ and $W_2^{\\\\text{Conv}}$ have been padded to the same shape.)\\n\\nConsidering these differences, we think __directly comparing a _dedicatedly designed network architecture_ with a _universal network acceleration method_ is unfair__. Nonetheless, we do compare a state-of-the-art reparameterization method [11] within the same scope in our paper. \\n\\nWe still appreciate the reviewer for this comment and will include the above comparison in our revised version.\\n\\n&nbsp;\\n\\n---\\n\\n__4. 
Response to W4__ ___(the proposed method still suffers from accuracy drops)___:\n\nWe respectfully disagree with this weakness.\n\n* __Accuracy is indeed preserved post-reparameterization__: As shown in Table 1, the accuracy remains unchanged before and after reparameterization, while the inference speed significantly improves. This demonstrates that the reparameterization process does not lead to accuracy degradation. In addition, we have provided the source code in our supplementary materials, which can validate this as well.\n\n* __Accuracy gap explanation__: The accuracy gap between our method and vanilla models arises because vanilla models do not have idling channels in activation. Training models with significantly fewer non-linearities inherently leads to a performance drop. This trade-off is common across various model compression methods, where increased efficiency often comes at the cost of some accuracy loss.\n\n* __Improved performance with reduced idle ratios__: As shown in Table 4, reducing the idle ratio further improves performance. For example, with a 25% idle ratio, RePa-Swin-Base achieves 83.7% top-1 accuracy, surpassing the vanilla Swin-Base's 83.5% accuracy. This highlights the adaptability of our method to different idle ratios while maintaining competitive accuracy.\n\n&nbsp;\n\n---\n\nWe hope our explanations adequately address the weaknesses raised by Reviewer nzL3. We sincerely appreciate the Reviewer's helpful suggestions and recognition of the significance of our work. We respectfully request reconsideration of the score.\n\n&nbsp;\n\n[1] Wang, Guo-Hua, and Jianxin Wu. \"Practical network acceleration with tiny sets.\" CVPR, 2023.\n\n[2] Zhang, Hanxiao, et al. \"Dense Vision Transformer Compression with Few Samples.\" CVPR, 2024.\n\n[3] Tang, Yehui, et al. \"Patch slimming for efficient vision transformers.\" CVPR, 2022.\n\n[4] Yu, Shixing, et al. 
\\\"Unified visual transformer compression.\\\" ICLR, 2022.\\n\\n[5] Yu, Fang, et al. \\\"Width & depth pruning for vision transformers.\\\" AAAI, 2022.\\n\\n[6] Yu, Lu, and Wei Xiang. \\\"X-pruner: explainable pruning for vision transformers.\\\" CVPR, 2023.\\n\\n[7] He, Yang, et al. \\\"Data-independent Module-aware Pruning for Hierarchical Vision Transformers.\\\" ICLR, 2024.\\n\\n[8] Vasu, Pavan Kumar Anasosalu, et al. \\\"Mobileone: An improved one millisecond mobile backbone.\\\" CVPR, 2023.\\n\\n[9] Vasu, Pavan Kumar Anasosalu, et al. \\\"FastViT: A fast hybrid vision transformer using structural reparameterization.\\\" ICCV, 2023.\\n\\n[10] Ding, Xiaohan, et al. \\\"Repvgg: Making vgg-style convnets great again.\\\" CVPR, 2021.\\n\\n[11] Guo, Jialong, et al. \\\"SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization.\\\" ICML, 2024.\"}", "{\"metareview\": \"The submission introduces RePaFormer, a structural reparameterization method for FFN layers in Vision Transformers (ViTs), with the promise of improving inference speed while minimizing accuracy loss. Despite extensive rebuttals and additional experiments, the paper fails to convincingly demonstrate a clear and significant advantage over existing methods, including network pruning and other model compression techniques. 
Concerns about generalizability, applicability, and practicality remain unresolved.\", \"strengths\": [\"Broad Experimental Scope: The paper includes evaluations across various tasks and model sizes, showing promising speed-ups for large ViTs.\", \"Improved Efficiency: The method shows notable inference speed gains for large models, sometimes coupled with minor accuracy improvements.\"], \"weaknesses\": [\"Modest Improvements: The observed improvements are not consistently significant, particularly when accounting for increased training costs and the complexity of implementation.\", \"Weak Comparison Baselines: Direct comparisons with existing structured reparameterization methods and network pruning are either inadequate or reveal comparable performance, diminishing the claimed advantage of the proposed method.\", \"Limited Generalizability: The method is less effective on smaller models and dense prediction tasks, restricting its practical applicability.\", \"Overemphasis on Large Models: The primary benefits are confined to large ViT models, with limited utility for smaller architectures or diverse tasks.\", \"Training Instability: Issues such as collapsed training for certain backbones highlight potential scalability challenges.\", \"While RePaFormer offers some promising results, its contributions are overshadowed by the lack of clear advantages over existing methods and concerns about practicality and applicability. The reviewers' consensus highlights the need for more robust comparative baselines, refined theoretical framing, and a clearer articulation of its unique value proposition. 
Based on the current submission and reviews, RePaFormer does not meet the acceptance threshold for ICLR 2025\"], \"additional_comments_on_reviewer_discussion\": [\"Reviewer Points and Author Responses:\", \"Baseline Comparisons (NBjZ, nzL3, ufAp)\", \"Concern: Limited comparisons with network pruning, structured reparameterization, and simplified baselines.\", \"Response: Authors added comparisons with pruning methods and new baselines using simplified architectures. Results showed marginal improvements, primarily for large models. Comparisons were deemed insufficient for demonstrating significant advantages.\", \"Applicability to Smaller Models (NBjZ)\", \"Concern: Accuracy drops on smaller models and unclear application scenarios.\", \"Response: Authors provided additional results with smaller idle ratios for small models, showing slight improvements but still lagging behind simpler methods like pruning.\", \"Training Overheads and Stability (ufAp, nzL3)\", \"Concern: High training costs and instability in some baselines (e.g., LV-ViT-M).\", \"Response: Authors justified the training costs as reasonable for large models but acknowledged training instability for certain configurations. The reviewers found this explanation insufficient for broader applicability.\", \"Dense Prediction Tasks (NBjZ, wqPR)\", \"Concern: Reduced speed improvements on dense prediction tasks.\", \"Response: Authors attributed this to increased tensor operations with high-resolution inputs. 
While plausible, the explanation did not alleviate concerns about limited practicality for such tasks.\", \"Use of BatchNorm and Reparameterization (ufAp)\", \"Concern: Unclear advantage of BatchNorm and reparameterization over simpler alternatives.\", \"Response: Authors presented new baselines with BatchNorm and reparameterization but showed only marginal benefits, raising doubts about the necessity of the proposed complexity.\", \"Novelty and Contribution (all reviewers)\", \"Concern: Limited practical value, especially given comparable or superior alternatives like pruning.\", \"Response: Authors emphasized advantages for large models and scalability, but reviewers found the claims incremental.\"]}", "{\"title\": \"reply\", \"comment\": \"Thanks for the rebuttal.\\n\\nThe reviewer appreciates the efforts made by the authors and I tend to keep my original rating for now.\"}", "{\"title\": \"Response to Reviewer NBjZ (Part 2)\", \"comment\": \"__3. Response to W3__ ___(not comparing with RepVGG-style methods and other compression methods)___:\\n\\n1. __Not comparing with RepVGG-style methods__: As we have explained in the response to W2 and Q1, RepVGG-style methods are quite distinct from our method and cannot be applied to the FFN layer in ViTs. Directly comparing with methods [1,2] that operate in a different scope would be unfair and uninformative. However, we have indeed compared against a state-of-the-art reparameterization method [7] in a similar scope in our manuscript.\\n\\n2. __Not comparing with other compression methods__: As Reviewer wqPR pointed out, _comparing with those (model compression) methods is out of the scope of this work_; our RePaFormer method is parallel to existing model compression methods and can be adopted alongside them. Thus, we only focus on comparing with the state-of-the-art structural reparameterization method [7] for expediting ViTs. 
\\n\\nDespite different scopes, we are willing to provide comparisons between our method and representative pruning methods for ViTs.\\n\\n1. We compare with the state-of-the-art pruning method for ViT, DC-ViT [5] (CVPR24). Since DC-ViT has not released the code, we only compare the latency drop reported in the paper:\\n\\n |Model|Latency drop|Top-1 acc. (%)|\\n |:-|-|-|\\n |DC-ViT-T|-16.5%|__64.4__|\\n |RePa-ViT-T|__-24.6%__|64.3|\\n |DC-ViT-S|-16.7%|__78.6__|\\n |RePa-ViT-S|__-35.3%__|77.1|\\n |DC-DeiT-B|-16.7%|81.3|\\n |RePa-DeiT-B|__-40.3%__|__81.4__|\\n\\n2. We compare our method with other representative pruning methods for ViTs regarding computational complexity and accuracy in the table below:\\n\\n |Model|Complexity (GMACs)|Top-1 acc. (%)|\\n |:-|-|-|\\n |__DeiT-Base:__|||\\n |PatchSlimming [8]|9.8|81.5|\\n |UVC [9]|8.0|80.6|\\n |WDPruning [10]|9.9|80.8|\\n |X-pruner [11]|8.5|81.0|\\n |RePaFormer|10.6|81.4|\\n |__Swin-Base:__|||\\n |PatchSlimming [8]|9.8|81.5|\\n |UVC [9]|8.0|80.6|\\n |DIMAP2 [6]|10.2|83.4|\\n |RePaFormer|9.0|82.6|\\n\\nIt is worth noting again that __our RePaFormer is a novel direction of accelerating ViT models, which is parallel to existing compression methods and can be adopted with them simultaneously__.\\n\\n&nbsp;\\n\\n---\\n\\n__4. Response to W4__ ___(the latency improvement on dense prediction tasks is small)___:\\n\\nWe thank the Reviewer for this insightful comment. We agree that the reduced inference speed gain on dense prediction tasks is partly due to high-resolution inputs; however, we would like to kindly point out that __the key factor is the increased computational demand of tensor operations__.\\n\\n1. We note that __FFN layers still occupy a large portion of the total computational complexity__: \\n\\n Using Swin Transformer as an example, the theoretical computational complexity of a single MHSA layer is $O(4hwC^2+2M^2hwC)$ while the corresponding FFN layer complexity is $O(8hwC^2)$. 
Thus, only when $M^2>2C$ does the MHSA layer complexity exceed that of the FFN layer. \\n\\n However, in the Swin Transformer architecture, the window size $M$ is fixed at 7 while the channel dimension $C$ is at least 96 and increases across layers. Therefore, the FFN layers consistently represent a larger portion of the computational complexity.\\n\\n2. As the input resolution increases, __the processing time for numerous tensor operations (e.g., reshaping, copying, and concatenating) also rises significantly__. Each Swin Transformer layer performs two window-reshaping operations, one window-shifting operation and one mask-copying operation. These on-device tensor operations become non-negligible when the input resolution reaches 800$\\\\times$800.\\n\\nIn fact, this reduced speed gain on dense prediction tasks is a common challenge, as evidenced in similar work. For example, our main state-of-the-art competitor, SLAB [7], achieves even smaller speed improvements than our RePaFormer on dense prediction tasks with the same predictor.\\n\\n&nbsp;\\n\\n---\\n\\n__5. Response to W5__ ___(overlapping between Tables 1 and 2)___:\\n\\nTable 1 and Table 2 serve different purposes. \\n\\nTable 1 illustrates that our RePaFormer method is lossless post-reparameterization, assuring users that our method delivers substantial inference speed gains without any accuracy drop after reparameterization in practical scenarios.\\n\\nTable 2 demonstrates that RePaFormer yields a more significant speed gain and a much narrower performance gap when the backbone model complexity increases.\\n\\n&nbsp;\\n\\n---\\n\\nWe sincerely hope our comprehensive explanations and experimental results can address the Reviewer's doubts and __respectfully request a reconsideration of the score__.\"}", "{\"comment\": \"Thank you for your additional experiments! 
While the results are appreciated, the improvements over the new baselines appear to be modest, particularly when factoring in the training costs of the RePa models compared to the new baselines with only $C^2$ fully-connected parameters. Additionally, the hyper-parameters of the new baselines may not be well tuned, as indicated by the NaN results in the LV-ViT-M cases.\\n\\nHowever, the responses have addressed some of my questions. I have decided to increase my rating from 3 to 5.\"}", "{\"title\": \"Appreciation for Contributive Review Comments and Discussion\", \"comment\": \"Dear Reviewer wqPR,\\n\\nWe sincerely appreciate your insightful comments and suggestions, especially the recommendation to empirically prove the effectiveness of our method on large and huge ViTs. Through the discussion with you and other Reviewers, we have gained a chance to present the key advantages and application scenarios of our approach: __RePaFormer can significantly speed up large ViT models while even improving accuracy__. This insight demonstrates the practical value of RePaFormer __in accelerating large-scale models without compromising performance, making it an effective solution for large real-world applications requiring both speed and precision__.\\n\\nWe would like to share the results on ViT-Large and ViT-Huge, along with those for MLPMixer-l16 reported in the original manuscript, in the table below. 
Our method not only drastically reduces model size and latency but also achieves HIGHER top-1 accuracy on large models with more than 200M parameters and computational complexities exceeding 40 GMACs.\\n\\n|Model|#MParam.|Complexity (GMACs)|Throughput (img/s)|Top-1 accuracy|\\n|:-|:-|:-|:-|:-|\\n|ViT-Large|304.3|59.7|124.2|80.3%|\\n|RePaViT-Large|__178.4__ (-41.4%)|__34.9__ (-41.5%)|__207.2__ (+66.8%)|__82.0%__|\\n|ViT-Huge|632.2|124.3|61.5|80.3%|\\n|RePaViT-Huge|__369.6__ (-41.5%)|__72.6__ (-41.6%)|__103.8__ (+68.7%)|__81.4%__|\\n|MLPMixer-l16|208.2|44.6|460.0|72.3%|\\n|RePaMLPMixer-l16|__82.2__ (-60.5%)|__20.0__ (-55.2%)|__302.7__ (+89.2%)|__72.6%__|\\n\\nTo the best of our knowledge, RePaFormer is the __first novel method (orthogonal to network pruning, quantization and distillation) that achieves significant acceleration (\\\\~68%) while having positive gains in accuracy (1\\\\~2%) instead of accuracy drops, on large and huge ViTs__. Considering the unprecedented results RePaFormer is getting, we want to point out that this is a disruptive and timely innovation for the community and a significant addition to the large foundation models acceleration toolkit. Since RePaFormer can be both directly applied to larger ViT architectures and combined with other acceleration techniques such as quantization, we believe RePaFormer will catalyze further research and breakthroughs on ViT's speed and accuracy. __We strongly believe that the weight and impact of this work make it best-suited for the prestigious ICLR, and the community will benefit greatly by seeing it soon from this venue.__\\n\\nWe would like to sincerely thank you once again for your thoughtful review and constructive discussion. We hope to receive your strong support through consideration of a score increase.\\n\\nBest regards, \\nAuthors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your reply. 
__We would like to confirm whether all your concerns have been appropriately addressed and resolved.__ There are still a few days left for discussion, and we would be glad to clarify any further questions to earn your strong support. \\n\\nIn addition, we are actively working on an update to the manuscript and will ensure that key new results, insights, and discussions generated during the rebuttal are incorporated after the end of the rebuttal stage.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Kind Request for Discussion and Reconsideration of the Score\", \"comment\": \"We sincerely thank the reviewer for the valuable suggestions and insightful comments. In our response, we have carefully addressed all the raised concerns regarding 1) the advantages of our method, 2) the performance gaps on SSL tasks, 3) the performance gaps on dense prediction tasks, 4) the training cost and 5) the performance on even larger models.\\n\\nIf the reviewer has any further questions, __we are glad to join in the discussion__. Otherwise, if our responses have satisfactorily resolved the concerns, __could the reviewer reconsider the score based on our clarifications and responses?__\"}", "{\"summary\": \"This work proposes a reparameterization technique for vision transformers (ViTs) to improve their test-time efficiency. 
It achieves this by leaving some channels idle, which are not passed through activation functions and can thus be merged at inference time. Experiments show that the proposed method can notably reduce the latency of ViT-based classification models at the cost of some accuracy loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is clearly written and easy to follow.\\n\\n2. The proposed method is motivated by a comprehensive latency analysis.\\n\\n3. Experiments demonstrate that the proposed method significantly improves the throughput of ViT-based classification models.\", \"weaknesses\": \"1. The main concern with this paper is the significant accuracy drop induced by the reparameterization. As shown in Table 2, the throughput improvement comes at the cost of a substantial accuracy loss, such as -7.9% on DeiT-Tiny and -6.7% on PoolFormer-s12. It appears that the proposed method scales poorly to smaller models, and simpler compression techniques like pruning might be a better option for model acceleration.\\n\\n2. Another major concern is the lack of analysis and comparison with other reparameterization strategies. Specifically, it is unclear why the proposed method is preferable to RepVGG-style multi-branch reparameterization, as leaving some channels idle without passing through activation functions can be considered a special case of a dual-branch structure. The authors should analyze the underlying reasons and key differences that make the proposed method distinct.\\n\\n3. The experimental benchmarks are insufficient. A comparison with (1) vanilla reparameterization techniques (e.g., RepVGG-style multi-branch structure) and (2) other compression methods that offer different accuracy-efficiency trade-offs should be included.\\n\\n4. The latency improvement on dense prediction tasks is small, potentially because FFNs occupy a smaller portion of the runtime for high-resolution inputs.\\n\\n5. 
Minor: Tables 1 and 2 have considerable overlap. Retaining only Table 2 should be sufficient.\", \"questions\": \"My questions and concerns are listed in the weakness section. My main question is why the proposed method is a better choice than RepVGG-style multi-branch reparameterization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind Request for Discussion and Reconsideration of the Score\", \"comment\": \"We sincerely thank the reviewer for the valuable suggestions and insightful comments. In our response, we have carefully addressed all the raised concerns regarding 1) the effect of BatchNorm, and 2) the comparison with the reparameterized model trained from scratch.\\n\\nIf the reviewer has any further questions, __we are glad to join in the discussion__. Otherwise, if our responses have satisfactorily resolved the concerns, __could the reviewer reconsider the score based on our clarifications and responses?__\"}", "{\"comment\": \"Dear Reviewer ufAp,\\n\\nWe sincerely thank you for your continued engagement in the discussion. In our previous response, we made every effort to address your further concerns and to clarify the key contributions and significance of our work, including:\\n\\n* Explaining that the accuracy gain of our method is NOT moderate compared to the baseline, and the trade-off on training time is worthwhile.\\n\\n* Providing additional experimental results on large ViT models, where our method achieves both improved efficiency and increased accuracy.\\n\\n* Demonstrating the transformative contribution of our work for large-scale foundation models in vision tasks.\\n\\n__We greatly appreciate your time in carefully reviewing our further response. 
If it satisfactorily resolves all your concerns, we would be deeply grateful for your support of our work and reconsideration of the score.__\\n\\nBest regards, \\nAuthors\"}", "{\"title\": \"Supplementary experiment results\", \"comment\": \"The comparison results are presented in the table below. The _\\\"New baseline\\\"_ refers to the setting where the channel idle mechanism is replaced by a $\\\\mathbb{R}^{C\\\\times C}$ linear projection weight with BatchNorm during training. The _\\\"Ours\\\"_ refers to the RePaFormer models trained with $6C^2$ linear projection weights. All the models are trained using the same training recipes as their corresponding RePaFormer variants.\\n\\nTo provide a comprehensive comparison, we report both the pre- and post-reparameterization results during inference. It is important to note that __the post-reparameterization structure of _\\\"New baseline\\\"_ models should be the same as that of our RePaFormers__. \\n\\nIn addition, for fairness, we re-test all the models on a single A6000 GPU. 
As a result, the inference throughputs may differ slightly from those reported in the manuscript.\\n\\n|Backbone|Method|Inference Reparam.|#MParam.|Complexity (GMACs)|Throughput (img/s)|Top-1 accuracy|\\n|:-|:-|:-|:-|:-|:-|:-|\\n|__DeiT-Tiny:__||||||\\n||New baseline|\\u00d7|__3.5__|__0.8__|3993.4|__64.3%__|\\n||New baseline|\\u221a|__3.5__|__0.8__|__4328.0__|__64.3%__|\\n||Ours|\\u221a|__3.5__|__0.8__|__4328.0__|64.2%|\\n|__DeiT-Small:__||||||\\n||New baseline|\\u00d7|__13.2__|__2.9__|1774.2|75.7%|\\n||New baseline|\\u221a|__13.2__|__2.9__|__1983.5__|75.7%|\\n||Ours|\\u221a|__13.2__|__2.9__|__1983.5__|__77.1%__|\\n|__DeiT-Base:__||||||\\n||New baseline|\\u00d7|__51.1__|__10.6__|589.8|80.6%|\\n||New baseline|\\u221a|__51.1__|__10.6__|__654.6__|80.6%|\\n||Ours|\\u221a|__51.1__|__10.6__|__654.6__|__81.4%__|\\n|__ViT-Large:__||||||\\n||New baseline|\\u00d7|__178.4__|__34.9__|188.9|80.3%|\\n||New baseline|\\u221a|__178.4__|__34.9__|__199.7__|80.3%|\\n||Ours|\\u221a|__178.4__|__34.9__|__199.7__|__82.0%__|\\n|__Swin-Tiny:__||||||\\n||New baseline|\\u00d7|__17.5__|__2.6__|966.3|78.0%|\\n||New baseline|\\u221a|__17.5__|__2.6__|__1021.9__|78.0%|\\n||Ours|\\u221a|__17.5__|__2.6__|__1021.9__|__78.5%__|\\n|__Swin-Small:__||||||\\n||New baseline|\\u00d7|__29.9__|__5.1__|595.2|79.1%|\\n||New baseline|\\u221a|__29.9__|__5.1__|__654.6__|79.1%|\\n||Ours|\\u221a|__29.9__|__5.1__|__654.6__|__81.6%__|\\n|__Swin-Base:__||||||\\n||New baseline|\\u00d7|__52.8__|__9.0__|417.0|80.3%|\\n||New baseline|\\u221a|__52.8__|__9.0__|__452.8__|80.3%|\\n||Ours|\\u221a|__52.8__|__9.0__|__452.8__|__82.6%__|\\n|__LV-ViT-S:__||||||\\n||New baseline|\\u00d7|__19.1__|__4.7__|1031.0|81.3%|\\n||New baseline|\\u221a|__19.1__|__4.7__|__1125.6__|81.3%|\\n||Ours|\\u221a|__19.1__|__4.7__|__1125.6__|__81.6%__|\\n|__LV-ViT-M:__||||||\\n||New baseline|\\u00d7|__40.1__|__8.8__|582.8|NaN (collapse)|\\n||New baseline|\\u221a|__40.1__|__8.8__|__643.6__|NaN 
(collapse)|\\n||Ours|\\u221a|__40.1__|__8.8__|__643.6__|__83.5%__|\", \"in_conclusion\": [\"__Training with $6C^2$ parameters generally achieves better performance than training with $C^2$ parameters.__\", \"__The benefit of training with $6C^2$ parameters becomes more significant as the model size increases.__\", \"__Inference-time reparameterization of BatchNorm and shortcut improves the throughput.__\", \"We hope our experimental results can address the reviewer's concerns and respectfully request a reconsideration of the score.\"]}", "{\"title\": \"Another Respectful Request for Discussion and Reconsideration of the Score\", \"comment\": \"Dear Reviewer nzL3,\\n\\nWe sincerely appreciate your time and effort in reviewing our work, especially pointing out that ___the problem studied in this paper is critical___. In our previous rebuttal, we made every effort to appropriately address all your concerns. \\n\\nAs for your question on ___is there any workaround to avoid using BatchNorm___, we would like to clarify that __LayerNorm is specific to the input feature and is therefore not structurally reparameterizable.__ To enable the reparameterization of both the normalization layer and shortcut, an input-agnostic normalization method (e.g., BatchNorm) is needed. However, exploring ways to reparameterize LayerNorm can be an interesting direction for future research.\\n\\nIn the end, we kindly request your feedback on our rebuttal. We are willing to answer any further questions during the remaining discussion period.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ufAp\", \"comment\": \"We thank Reviewer ufAp for the valuable comments concerning the two major designs of our RePaFormer models. We would like to provide clarification on these points below:\\n\\n---\\n\\n__1. Response to W1__ ___(whether BatchNorm alone could improve the test performance)___:\\n\\nThis is a good question. 
In fact, previous work [1] has already investigated leveraging BatchNorm instead of LayerNorm in ViT and discovered that directly applying BatchNorm in ViT would lead to irregular training crashes. To address this, [1] proposes a novel approach (FFNBN) to enable training ViT with BatchNorm. However, all the experimental results on both DeiT and Swin Transformer demonstrate that __BatchNorm consistently yields worse performance than LayerNorm on ViTs__. We cite the results from [1] in the table below:\\n\\n|Model|Normalization|Top-1 acc|\\n|:-|-|-|\\n|DeiT-S|LayerNorm|__79.8%__|\\n|DeiT-S|BatchNorm+FFNBN|78.8%|\\n|Swin-T|LayerNorm|__81.2%__|\\n|Swin-T|BatchNorm+FFNBN|80.9%|\\n|Swin-S|LayerNorm|__83.0%__|\\n|Swin-S|BatchNorm+FFNBN|82.8%|\\n|Swin-B|LayerNorm|__83.3%__|\\n|Swin-B|BatchNorm+FFNBN|83.1%|\\n\\n---\\n\\n__2. Response to W2__ ___(whether channel idle mechanism can be replaced by a single learnable linear function)___:\\n\\nWe thank the Reviewer for this insightful comment and believe this comment aligns with Reviewer nzL3's W2. We would like to interpret this weakness as comparing the RePaFormers: \\n \\n 1. trained with $6C^2$ weights and then reparameterized into $C^2$\\n 2. trained with a single $C^2$ linear projection weight from scratch\\n\\nThe table below shows the results of RePa-DeiT, RePa-Swin and RePa-LV-ViT when trained with a single $C^2$ linear projection weight. The training settings are the same as their counterparts outlined in the manuscript. After reparameterizing the BatchNorm, the differences in the inference speed and the number of parameters for these two cases should be negligible, so we only report their accuracies. The _NaN_ values indicate that the training crashed. 
\\n\\n|Backbone|$6C^2$ weights + channel idle|$C^2$ linear projection weight|\\n|:-|-|-|\\n|RePa-DeiT-Tiny|__64.3__|59.6|\\n|RePa-DeiT-Small|__77.1__|75.0|\\n|RePa-DeiT-Base|__81.4__|_NaN_|\\n|RePa-Swin-Tiny|__78.5__|77.1|\\n|RePa-Swin-Small|__81.6__|79.3|\\n|RePa-Swin-Base|__82.6__|79.6|\\n|RePa-LV-ViT-S|__81.6__|_NaN_|\\n|RePa-LV-ViT-M|__83.6__|_NaN_|\\n\\nAs shown in the table, training with $6C^2$ parameters and subsequently reparameterizing them into $C^2$ __consistently achieves better performance__, in line with the findings in [2,3,4] that train-time overparameterization improves the performance.\\n\\n---\", \"in_conclusion\": \"* __Purely using BatchNorm alone does not enhance ViT testing performance.__\\n* __Relying solely on a single linear projection for the channel idle process results in lower performance.__\\n\\nThe above discussions and experimental results will be included in the revised version. We hope our explanation can resolve the Reviewer's concerns and __respectfully request consideration for a score increase__.\\n\\n&nbsp;\\n\\n[1] Yao, Zhuliang, et al. \\\"Leveraging batch normalization for vision transformers.\\\" ICCV, 2021.\\n\\n[2] Vasu, Pavan Kumar Anasosalu, et al. \\\"FastViT: A fast hybrid vision transformer using structural reparameterization.\\\" ICCV, 2023.\\n\\n[3] Vasu, Pavan Kumar Anasosalu, et al. \\\"Mobileone: An improved one millisecond mobile backbone.\\\" CVPR, 2023.\\n\\n[4] Ding, Xiaohan, et al. \\\"Repvgg: Making vgg-style convnets great again.\\\" CVPR, 2021.\"}", "{\"title\": \"Response to Reviewer NBjZ (Part 3)\", \"comment\": \"[1] Vasu, Pavan Kumar Anasosalu, et al. \\\"FastViT: A fast hybrid vision transformer using structural reparameterization.\\\" ICCV, 2023.\\n\\n[2] Wang, Ao, et al. \\\"Repvit: Revisiting mobile cnn from vit perspective.\\\" CVPR, 2024.\\n\\n[3] Rao, Yongming, et al. 
\\\"Dynamicvit: Efficient vision transformers with dynamic token sparsification.\\\" NeurIPS, 2021.\\n\\n[4] Xu, Yifan, et al. \\\"Evo-vit: Slow-fast token evolution for dynamic vision transformer.\\\" AAAI, 2022.\\n\\n[5] Zhang, Hanxiao, Yifan Zhou, and Guo-Hua Wang. \\\"Dense Vision Transformer Compression with Few Samples.\\\" CVPR, 2024.\\n\\n[6] He, Yang, and Joey Tianyi Zhou. \\\"Data-independent Module-aware Pruning for Hierarchical Vision Transformers.\\\" ICLR, 2024.\\n\\n[7] Guo, Jialong, et al. \\\"SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization.\\\" ICML, 2024.\\n\\n[8] Tang, Yehui, et al. \\\"Patch slimming for efficient vision transformers.\\\" CVPR, 2022.\\n\\n[9] Yu, Shixing, et al. \\\"Unified visual transformer compression.\\\" ICLR, 2022.\\n\\n[10] Yu, Fang, et al. \\\"Width & depth pruning for vision transformers.\\\" AAAI, 2022.\\n\\n[11] Yu, Lu, and Wei Xiang. \\\"X-pruner: explainable pruning for vision transformers.\\\" CVPR, 2023.\"}", "{\"title\": \"Response to Reviewer wqPR (Part 2)\", \"comment\": \"__2. Response to W2__ ___(the performance gap with self-supervised baselines is larger)___:\\n\\nWe thank the Reviewer for this insightful comment. However, applying our method to self-supervised learning models is an extended exploration included in this paper. The primary purpose of this experiment is to validate the key characteristic of our model\\u2014__the inference speed gain increases and the accuracy gap narrows as the model size grows__\\u2014still holds for self-supervised learning. Thus, we simply train our RePaFormers with the same self-supervised training schema as the baselines without any optimizations. 
We acknowledge the importance of employing our method in self-supervised learning baselines and plan to explore this direction in-depth in our future work, with a specific focus on generalizability as suggested.\\n\\nAfter all, the main focus of our work still remains on supervised learning, where our method achieves significant success. This alone already represents a substantial contribution of our approach.\\n\\n&nbsp;\\n\\n---\\n\\n__3. Response to W3__ ___(the performance gap at dense prediction tasks is non-negligible)___:\", \"we_interpret_this_comment_as_asking__why_the_performance_gap_on_dense_prediction_tasks_is_very_different_from_the_performance_gap_on_classification_task_\": \"1. First, we kindly argue that dense prediction tasks and classification tasks use distinct evaluation metrics\\u2014mean average precision (mAP) for dense prediction and accuracy for classification. __Directly comparing the different metric gaps is thus inherently unfair__.\\n\\n2. Second, even if such a comparison is made, __the performance gaps for dense prediction tasks align closely with those for classification tasks__. Specifically, the accuracy gaps for RePa-Swin-Small and RePa-Swin-Base on classification tasks are 1.4% and 0.9%, respectively. Similarly, the mAP gaps for RePa-Swin-Small and RePa-Swin-Base on object detection tasks using Mask R-CNN are 0.019 and 0.01, respectively. These results indicate that the performance gaps are comparable, and the trend is consistent: larger models exhibit narrower performance gaps.\\n\\n3. Additionally, when incorporating RetinaNet as the dense predictor, our method achieves even higher mAP on the object detection task, further validating its effectiveness.\\n\\nHowever, if this comment is asking _why the speed improvement reduces on dense prediction tasks compared to that on the classification task_: \\n\\n1. First, we note that __FFN layers still occupy a large portion of the total computational complexity__. 
Using the Swin Transformer as an example, the theoretical computational complexity of a single MHSA layer is $O(4hwC^2+2M^2hwC)$ while the corresponding FFN layer complexity is $O(8hwC^2)$. Thus, only when $M^2>2C$ does the MHSA layer complexity exceed that of the FFN layer. \\n\\n However, in the Swin Transformer architecture, the window size $M$ is fixed at 7 while the channel dimension $C$ is at least 96 and increases across layers. Therefore, the FFN layers consistently represent a larger portion of the theoretical computational complexity.\\n\\n2. __The key factor is the increased computational demand of tensor operations__. In fact, as the input resolution increases in dense prediction tasks, the processing time for several tensor operations (e.g., reshaping, copying, and concatenating) also rises significantly. For instance, each Swin Transformer layer performs two window-reshaping operations, one window-shifting operation and one mask-copying operation, which become non-negligible when the input resolution reaches 800$\\\\times$800.\\n\\n This reduced speed gain on dense prediction tasks is a common challenge, as evidenced in similar work. For example, our main state-of-the-art competitor, SLAB [6], achieves even smaller speed improvements than our RePaFormer on dense prediction tasks with the same predictor.\\n\\n&nbsp;\\n\\n---\\n\\n__4. 
Response to Q1__ ___(what would be the training time of the proposed method compared to the vanilla version)___:\", \"we_have_provided_the_estimated_training_cost_of_each_model_on_the_imagenet_1k_dataset_using_16_nvidia_h100_gpus_in_terms_of_the_gpu_hours_in_the_table_below\": \"|Backbone | Vanilla | RePaFormer|\\n|:-|-|-|\\n|DeiT-Tiny|60.0|72.3|\\n|DeiT-Small |70.7|90.0|\\n|DeiT-Base |93.3|123.3|\\n|Swin-Tiny |117.3|139.3|\\n|Swin-Small |180.0|215.7|\\n|Swin-Base |210.7|272.0|\\n|LV-ViT-S |159.3|189.3|\\n|LV-ViT-M |209.7|250.3|\\n|PoolFormer-s12 |74.7|86.7|\\n|PoolFormer-s24 |132.0|150.7|\\n|PoolFormer-s36 |186.7|216.0|\\n|MLPMixer-b16 |112.0|138.0|\\n|MLPMixer-l16 |233.3|298.7|\\n\\nThe BatchNorm in RePaFormers introduces an increased synchronization time between devices during training, leading to 16~28% longer training time on the ImageNet-1K dataset. However, the training overhead is still less significant compared to distillation methods, which can increase the training time by more than 40%.\"}", "{\"comment\": \"Dear Reviewer NBjZ,\\n\\nWe sincerely thank you for your continued engagement in the discussion. In our previous response, we made every effort to address your further concerns and to clarify the key contributions and significance of our work, including:\\n\\n* Providing additional experimental results on large ViT models, where our method achieves both improved efficiency and increased accuracy.\\n\\n* Demonstrating the transformative contribution of our work for large-scale foundation models in vision tasks.\\n\\n* Summarizing the guidelines for employing our method on different model architectures and different model sizes.\\n\\n __We greatly appreciate your time in carefully reviewing our further response. 
If it satisfactorily resolves all your concerns, we would be deeply grateful for your support of our work and reconsideration of the score.__\\n\\nBest regards, \\nAuthors\"}", "{\"comment\": \"I appreciate the authors' detailed responses addressing some of my concerns. I have raised my rating from 3 to 5 accordingly.\\n\\nCurrent results regarding the comparison with network pruning are a bit weak. I do not see clear advantages of the proposed method over network pruning, especially given that the proposed method is much more complicated. Although combining network pruning with the proposed method is possible, I am unsure if it can work without experimental results. Given the current results, I am unclear about the practical value of the proposed method.\"}", "{\"title\": \"Response to Reviewer ufAp's Further Concerns (Part 2)\", \"comment\": \"In addition to resolving the above concerns, we are excited to share the significant contribution and practical value of our RePaFormer with Reviewer ufAp.\\n\\nThrough discussions with you and other Reviewers, and especially inspired by Reviewer wqPR, we have gained a chance to present the key advantages and application scenarios of our approach: __RePaFormer can significantly speed up large ViT models while even improving accuracy__. This insight demonstrates the __practical value of RePaFormer in accelerating large-scale models without compromising performance, making it an effective solution for large real-world applications requiring both speed and precision__.\\n\\nTo empirically validate the practicality above, we conducted additional experiments on large vanilla ViTs, following Reviewer wqPR's kind suggestion. Both the vanilla and RePaFormer variants of ViT-Large and ViT-Huge are trained from scratch on the ImageNet-1k dataset using the same training recipes with the idle ratio set to 75% by default. 
The new results, along with those for MLPMixer-l16 reported in the original manuscript, are shown in the table below:\\n\\n|Model|#MParam.|Complexity (GMACs)|Throughput (img/s)|Top-1 accuracy|\\n|:-|:-|:-|:-|:-|\\n|ViT-Large|304.3|59.7|124.2|80.3%|\\n|RePaViT-Large|__178.4__ (-41.4%)|__34.9__ (-41.5%)|__207.2__ (+66.8%)|__82.0%__|\\n|ViT-Huge|632.2|124.3|61.5|80.3%|\\n|RePaViT-Huge|__369.6__ (-41.5%)|__72.6__ (-41.6%)|__103.8__ (+68.7%)|__81.4%__|\\n|MLPMixer-l16|208.2|44.6|460.0|72.3%|\\n|RePaMLPMixer-l16|__82.2__ (-60.5%)|__20.0__ (-55.2%)|__302.7__ (+89.2%)|__72.6%__|\\n\\nWe are thrilled to emphasize that our method not only drastically reduces model size and latency but also achieves HIGHER top-1 accuracy on large models with more than 200M parameters and computational complexities exceeding 40 GMACs. For instance, RePaViT-Large achieves a 1.7% higher top-1 accuracy (82.0% vs 80.3%) while delivering a 66.8% speed gain (207.2 images/second vs 124.2 images/second) compared to the vanilla ViT-Large. __This demonstrates a transformative contribution, as many practical large-scale foundation models for computer vision tasks rely on vanilla ViT as their backbone, such as CLIP [1], SAM [2] and ViT-22B [3].__\\n\\nTo the best of our knowledge, RePaFormer is the first novel method that achieves significant acceleration (\\\\~68%) while having positive gains in accuracy (1\\\\~2%) instead of accuracy drops, on large and huge ViTs. Considering the unprecedented results RePaFormer is getting, we want to point out that this is a disruptive and timely innovation for the community and a significant addition to the large foundation models acceleration toolkit. Since RePaFormer can be both directly applied to larger ViT architectures and combined with other acceleration techniques such as quantization, we believe RePaFormer will catalyze further research and breakthroughs on ViT's speed and accuracy. 
__We strongly believe that the weight and impact of this work make it best-suited for the prestigious ICLR, and the community will benefit greatly by seeing it soon from this venue.__\\n\\nWe hope our response and plenty of additional experimental results can address all your concerns and demonstrate the significance of our contributions. __We kindly request your strong support by considering a score increase__.\\n\\n&nbsp;\\n\\n[1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" ICML, 2021.\\n\\n[2] Kirillov, Alexander, et al. \\\"Segment anything.\\\" ICCV, 2023.\\n\\n[3] Dehghani, Mostafa, et al. \\\"Scaling vision transformers to 22 billion parameters.\\\" ICML, 2023.\"}", "{\"comment\": \"Sorry for the late reply and thank you for your response and additional experiments!\\n\\nIt seems that in the new setting presented, the Rep. Linear 3 does not have any normalization. (Please correct me if I am wrong.) However, as I mentioned in my initial review, I expected a baseline similar to the form of Figure (b)\\u2014specifically Linear projection 3 with a BatchNorm layer but without BatchNorm inference reparameterization. Therefore, I would suggest conducting experiments where Linear projection 3 has its own BatchNorm layer during training.\\n\\nWould it be possible to provide results for such a baseline?\"}", "{\"summary\": \"This paper presents a new method for accelerating FFN layers in MetaFormer-structured architectures. The core idea is to combine structured reparameterization and partial channel skipping. Experiments are done on ImageNet classification, self-supervised learning, and dense prediction tasks. The proposed method can accelerate various ViT models with some accuracy drop.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is interesting and technically sound.\\n2. 
The problem studied in this paper is critical, as FFN is a big efficiency bottleneck for ViTs.\\n3. I appreciate seeing results outside ImageNet classification.\", \"weaknesses\": \"1. This paper lacks direct comparisons with network pruning.\\n2. This paper lacks an essential baseline, training Figure 1 (c) from scratch. \\n3. This paper lacks direct comparisons with previous structured reparameterization methods in previous works (e.g., FastViT's design). \\n4. According to the results, the proposed method still suffers from accuracy drops.\", \"questions\": \"It seems the proposed method has to be used with BatchNorm. Is there any workaround to avoid using BatchNorm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind Request for Discussion and Reconsideration of the Score\", \"comment\": \"We sincerely thank the reviewer for the valuable suggestions and insightful comments. In our response, we have carefully addressed all the raised concerns regarding 1) the comparison with network pruning, 2) the comparison with the reparameterized model trained from scratch, 3) the comparison with FastViT and 4) the accuracy drop confusion.\\n\\nIf the reviewer has any further questions, __we are glad to join in the discussion__. Otherwise, if our responses have satisfactorily resolved the concerns, __could the reviewer reconsider the score based on our clarifications and responses?__\"}", "{\"title\": \"Response to Reviewer nzL3's Further Concerns (Part 1)\", \"comment\": \"We sincerely appreciate Reviewer nzL3 for carefully considering our rebuttal responses and for raising the score. We are pleased to hear that all other concerns have been appropriately addressed, except for the practical value of our method. 
The further suggestion to clearly articulate the advantages of our method over network pruning, as well as its practical value, is greatly appreciated and will further enhance the clarity and impact of our contributions.\\n\\nFirstly, __we respectfully disagree with the statement in your reply that _\\\"the proposed method is much more complicated (than network pruning)\\\"___. On the contrary, our method is both simple and effective. Unlike network pruning, which requires a detailed analysis of the network structure and parameter redundancy, our approach only involves straightforward modifications: replacing LayerNorm with BatchNorm and keeping certain channels inactivated in the FFN layer. Additionally, our method can be easily integrated into MetaFormer-structured models without significant architectural changes. Therefore, we conclude that __our method is generally simpler to implement and more adaptable than network pruning__.\\n\\nWe would like to show the pseudocode in PyTorch style:\\n\\n```python\\nimport torch\\nimport torch.nn as nn\\n\\nclass Mlp(nn.Module):\\n    def __init__(self, dim_in, dim_hidden, dim_out, act_layer, idle_ratio=0.75):\\n        super().__init__()\\n        self.norm1 = nn.BatchNorm1d(dim_in)\\n        self.fc1 = nn.Linear(dim_in, dim_hidden)\\n        self.norm2 = nn.BatchNorm1d(dim_hidden)\\n        self.fc2 = nn.Linear(dim_hidden, dim_out)\\n        self.act = act_layer()\\n        self.idle_channels = int(dim_hidden * idle_ratio)\\n\\n    def forward(self, x):\\n        x = self.norm1(x.transpose(-1, -2)).transpose(-1, -2)\\n        x = self.fc1(x)\\n\\n        # Activation with channel idle mechanism\\n        mask = torch.zeros_like(x, dtype=torch.bool)\\n        mask[:, :, self.idle_channels:] = True\\n        x = torch.where(mask, self.act(x), x)\\n\\n        x = self.norm2(x.transpose(-1, -2)).transpose(-1, -2)\\n        x = self.fc2(x)\\n        return x\\n```\\n\\nSecondly, through discussions with you and other Reviewers, and especially inspired by Reviewer wqPR, we have gained a chance to further present the key advantages and application scenarios of our approach: __RePaFormer can 
significantly speed up large ViT models while even improving accuracy__. This insight demonstrates the __practical value of RePaFormer in accelerating large-scale models without compromising performance, making it an effective solution for large real-world applications requiring both speed and precision__.\\n\\nTo empirically validate the practicality above, we conducted additional experiments on large vanilla ViTs, following Reviewer wqPR's kind suggestion. Both the vanilla and RePaFormer variants of ViT-Large and ViT-Huge are trained from scratch on the ImageNet-1k dataset using the same training recipes with the idle ratio set to 75% by default. The new results, along with those for MLPMixer-l16 reported in the original manuscript, are shown in the table below:\\n\\n|Model|#MParam.|Complexity (GMACs)|Throughput (img/s)|Top-1 accuracy|\\n|:-|:-|:-|:-|:-|\\n|ViT-Large|304.3|59.7|124.2|80.3%|\\n|RePaViT-Large|__178.4__ (-41.4%)|__34.9__ (-41.5%)|__207.2__ (+66.8%)|__82.0%__|\\n|ViT-Huge|632.2|124.3|61.5|80.3%|\\n|RePaViT-Huge|__369.6__ (-41.5%)|__72.6__ (-41.6%)|__103.8__ (+68.7%)|__81.4%__|\\n|MLPMixer-l16|208.2|44.6|460.0|72.3%|\\n|RePaMLPMixer-l16|__82.2__ (-60.5%)|__20.0__ (-55.2%)|__302.7__ (+89.2%)|__72.6%__|\\n\\nWe are thrilled to emphasize that our method not only drastically reduces model size and latency but also achieves __HIGHER__ top-1 accuracy on large models. For instance, RePaViT-Large achieves a 1.7% higher top-1 accuracy (82.0% vs 80.3%) while delivering a 66.8% speed gain (207.2 images/second vs 124.2 images/second) compared to the vanilla ViT-Large. 
__This demonstrates a transformative contribution, as many practical large-scale foundation models for computer vision tasks rely on vanilla ViT as their backbone, such as CLIP [1], SAM [2] and ViT-22B [3]__.\"}", "{\"title\": \"Kind Request for Rebuttal Discussion or Reconsideration of the Score\", \"comment\": \"We thank Reviewer NBjZ for the valuable feedback and insightful suggestions, which have helped us refine and clarify our work. We have carefully addressed all the raised concerns in our response.\\n\\nWe would greatly appreciate it if Reviewer NBjZ could provide further feedback. Your input is invaluable to ensuring the quality and clarity of our work. Or, __if our responses have satisfactorily resolved the concerns, we respectfully request reconsideration of the score based on the clarifications and improvements provided__.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer nzL3 (Part 1)\", \"comment\": \"We sincerely thank Reviewer nzL3 for recognizing that ___problem studied in this paper is critical___ and ___the proposed method is interesting and technically sound___. We are honoured to hear that multiple reviewers have acknowledged the importance of the problem and the novelty of our method. Regarding the issues raised by Reviewer nzL3, we address them as follows.\\n\\n&nbsp;\\n\\n---\\n\\n__1. Response to W1__ ___(lacking comparisons with network pruning)___:\\n\\nFirst, we would like to qualitatively compare the differences between our method and pruning methods:\\n\\n* __Different motivations__: Network pruning focuses on removing redundant parameters, creating a sparse network. In contrast, our method structurally combines parameters using linear algebra operations, resulting in a condensed yet structurally regular network.\\n \\n* __Different requirements and implementation difficulties__: Network pruning methods produce sparse networks. 
To fully utilize the sparsity introduced by pruning, specialized hardware (e.g., accelerators that support sparse computations) or software libraries are needed. This adds complexity to deployment and maintenance. On the contrary, the reparameterized RePaFormers are structurally regular and can be efficiently adopted on general-purpose hardware without specialized support.\\n\\n* __Different generalizabilities__: Different network architectures have varying sensitivity to pruning; some models may not be suitable for pruning or may not benefit significantly from it. In most cases, dedicated training processes or pruning strategies need to be designed for different backbones. However, our RePaFormer method is generally applicable to all MetaFormer-structured models and can be seamlessly integrated into them.\\n\\nSecond, we would like to provide quantitative analysis by comparing our method with state-of-the-art pruning methods for ViTs in terms of both latency drop and accuracy in the table below: \\n\\n|Method|Latency drop|Top-1 accuracy|\\n|:-|:-|:-|\\n|__DeiT-Base:__|||\\n|PRACTISE [1]|-14.5%|79.3%|\\n|DC-DeiT-B [2]|-16.7%|81.3%|\\n|RePaFormer (ours)|-40.3%|81.4%|\\n|__Swin-Base:__|||\\n|PRACTISE [1]|-14.5%|82.8%|\\n|DC-Swin-B [2]|-16.7%|83.8%|\\n|RePaFormer (ours)|-33.1%|82.6%|\\n\\nIn addition, we further compare our method with some other representative pruning methods for ViTs regarding computational complexity and accuracy in the table below:\\n\\n|Model|Complexity (GMACs)|Top-1 accuracy|\\n|:-|-|-|\\n|__DeiT-Base:__|||\\n|PatchSlimming [3]|9.8|81.5%|\\n|UVC [4]|8.0|80.6%|\\n|WDPruning [5]|9.9|80.8%|\\n|X-pruner [6]|8.5|81.0%|\\n|RePaFormer (ours)|10.6|81.4%|\\n|__Swin-Base:__|||\\n|PatchSlimming [3]|9.8|81.5%|\\n|UVC [4]|8.0|80.6%|\\n|DIMAP2 [7]|10.2|83.4%|\\n|RePaFormer (ours)|9.0|82.6%|\\n\\nAs demonstrated in the tables above, our method achieves comparable trade-offs between accuracy and complexity/inference time to network pruning methods.\\n\\nLast, 
it is worth emphasizing that __our RePaFormer is parallel to network pruning methods__. As pointed out by Reviewer wqPR, _comparing with those methods is __out of the scope__ of this work_. We would not claim our approach to be more advantageous than network pruning and anticipate they can be utilized together. \\n\\nIn any case, we thank the reviewer for this suggestion and will include the comparison in our revised version.\\n\\n&nbsp;\\n\\n---\\n\\n__2. Response to W2__ ___(lacking an essential baseline)___:\\n\\n__We thank the reviewer for this constructive comment__. It is important to compare the performance of \\n\\n 1. training vanilla FFN layers and subsequently reparameterizing them into an efficient structure, and\\n\\n 2. directly training reparameterized FFN layers from scratch.\\n\\nTherefore, we train RePa-DeiT, RePa-Swin and RePa-LV-ViT with a single $C^2$ linear projection weight without the channel idle mechanism from scratch. The training settings are kept consistent with their counterparts outlined in the manuscript. The results are shown in the table below, where the NaN values in the table represent the training crashes.\\n\\n|Backbone|Ours|Reparameterized (Figure 1(c))|\\n|:-|-|-|\\n|RePa-DeiT-Tiny|__64.3__|59.6|\\n|RePa-DeiT-Small|__77.1__|75.0|\\n|RePa-DeiT-Base|__81.4__|_NaN_|\\n|RePa-Swin-Tiny|__78.5__|77.1|\\n|RePa-Swin-Small|__81.6__|79.3|\\n|RePa-Swin-Base|__82.6__|79.6|\\n|RePa-LV-ViT-S|__81.6__|_NaN_|\\n|RePa-LV-ViT-M|__83.6__|_NaN_|\\n\\nAs shown in the table, __directly training the reparameterized model (as illustrated in Figure 1 (c)) consistently yields worse performance than our approach__, and occasionally results in training crashes. This observation aligns with the findings in [8,9] that training an overparameterized network from scratch achieves a more robust training process and better performance than training a network with fewer parameters. 
Additionally, we would like to clarify that the crashes in the reparameterized models are primarily due to instability caused by the lack of normalization, leading to unregulated value fluctuations during training.\\n\\nWe will include this baseline in our revised version.\"}", "{\"title\": \"Response to Reviewer nzL3's Further Concerns (Part 2)\", \"comment\": \"To the best of our knowledge, RePaFormer is the __first novel method _(orthogonal to network pruning, quantization and distillation)_ that achieves significant acceleration (\\\\~68%) while having positive gains in accuracy (1\\\\~2%) instead of accuracy drops, on large and huge ViTs__. Considering the unprecedented results RePaFormer is getting, we want to point out that this is a disruptive and timely innovation for the community and a significant addition to the large foundation models acceleration toolkit. Since RePaFormer can be both directly applied to larger ViT architectures and combined with other acceleration techniques such as quantization, we believe RePaFormer will catalyze further research and breakthroughs on ViT's speed and accuracy. __We strongly believe that the weight and impact of this work make it best-suited for the prestigious ICLR, and the community will benefit greatly by seeing it soon from this venue.__\\n\\nWe hope our response can address all your concerns and demonstrate the significance of our contributions. We kindly request your strong support by considering a score increase.\\n\\n&nbsp;\\n\\n[1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" ICML, 2021.\\n\\n[2] Kirillov, Alexander, et al. \\\"Segment anything.\\\" ICCV, 2023.\\n\\n[3] Dehghani, Mostafa, et al. 
\\\"Scaling vision transformers to 22 billion parameters.\\\" ICML, 2023.\"}", "{\"comment\": \"We are glad to clarify this statement.\\n\\nTo begin with, as shown in Figure 1(b) and Equation 2, there is a shortcut for each FFN layer as\\n$$\\n Z = \\\\text{FFN}(\\\\text{Norm}(Y)) + Y.\\n$$\\nHowever, the vanilla FFN incorporates LayerNorm (i.e., $\\\\text{Norm}(\\\\cdot)$=$\\\\text{LN}(\\\\cdot)$) as its normalization method, which is specific to the input feature and cannot be structurally reparameterized into linear projection weights. Consequently, the shortcut component (i.e., $+ Y$) cannot be reparameterized either. To address this, we replace LayerNorm with BatchNorm (i.e., $\\\\text{Norm}$=$\\\\text{BN}(\\\\cdot)$), enabling the reparameterization of both the normalization and the shortcut.\\n\\nNext, as suggested by the reviewer, the new baseline model should follow the structure in Figure 1(b), but with the channel idle mechanism replaced by a linear projection. This process can be expressed as:\\n\\\\begin{equation}\\n Z = \\\\text{Act}(\\\\text{BN}(Y)W^{\\\\text{1}})W^{\\\\text{2}} + \\\\text{BN}(Y)W^{\\\\text{3}} + Y,\\n\\\\end{equation}\\nwhere $W^{\\\\text{3}}\\\\in\\\\mathbb{R}^{C\\\\times C}$. __We are now directly training and testing the new baseline model as formulated above.__\\n\\nNonetheless, as the reviewer further required that _specifically Linear projection 3 with a BatchNorm layer but without BatchNorm inference reparameterization_, the shortcut $+Y$ in the above baseline model cannot be reparameterized into the projection weights $W^{\\\\text{3}}$ without BatchNorm reparameterized during inference. So we simply keep the BatchNorm and shortcut during testing, which can lead to a bit slower throughput. 
\\n\\nIn addition, __if the BatchNorms and shortcut in the above model are expected to be reparameterized during testing, we assure the reviewer that the accuracy will not drop after reparameterization while the inference speed and model size should be analogous to our model.__\\n\\nThe experimental results will be updated soon.\"}" ] }
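The reparameterization algebra discussed in the exchange above — folding the inference-mode BatchNorms into the adjacent linear projections, then merging the activation-free idle channels and the residual shortcut into a single $C \times C$ matrix — can be sketched in a few lines. The following is an illustrative NumPy reconstruction under simplifying assumptions (ReLU as the activation, 2-D inputs, and our own variable names), not the authors' implementation:

```python
import numpy as np

def bn(x, gamma, beta, mean, var, eps=1e-5):
    # Inference-mode BatchNorm over the last (channel) dimension.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def reparameterize(W1, b1, W2, b2, bn1, bn2, idle, eps=1e-5):
    """Collapse  BN1 -> fc1 -> (act on non-idle channels) -> BN2 -> fc2 -> +shortcut
    into        act(y @ Wa + ba) @ Wb  +  y @ Wc  +  bc.

    bn1, bn2 are (gamma, beta, running_mean, running_var) tuples; the first
    `idle` hidden channels bypass the activation and stay purely linear.
    """
    C = W1.shape[0]
    s1 = bn1[0] / np.sqrt(bn1[3] + eps)              # fold BN1 into fc1
    W1f = s1[:, None] * W1
    b1f = b1 + (bn1[1] - bn1[2] * s1) @ W1
    s2 = bn2[0] / np.sqrt(bn2[3] + eps)              # fold BN2 into fc2
    W2f = s2[:, None] * W2
    b2f = b2 + (bn2[1] - bn2[2] * s2) @ W2
    Wa, ba = W1f[:, idle:], b1f[idle:]               # narrow activated branch
    Wb = W2f[idle:, :]
    Wc = W1f[:, :idle] @ W2f[:idle, :] + np.eye(C)   # idle path + shortcut in one C x C matrix
    bc = b1f[:idle] @ W2f[:idle, :] + b2f
    return Wa, ba, Wb, Wc, bc
```

Because every step is exact linear algebra, the compact inference form matches the training-time output to floating-point precision, which is consistent with the claim in the exchange above that accuracy does not drop after reparameterization.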
EbCUbPZjM1
ReGen: Generative Robot Simulation via Inverse Design
[ "Phat Tan Nguyen", "Tsun-Hsuan Wang", "Zhang-Wei Hong", "Erfan Aasi", "Andrew Silva", "Guy Rosman", "Sertac Karaman", "Daniela Rus" ]
Simulation plays a key role in scaling robot learning and validating policies, but constructing simulations remains labor-intensive. In this paper, we introduce ReGen, a generative simulation framework that automates this process using inverse design. Given an agent's behavior (such as a motion trajectory or objective function) and its textual description, we infer the underlying scenarios and environments that could have caused the behavior. Our approach leverages large language models to construct and expand a graph that captures cause-and-effect relationships and relevant entities with properties in the environment, which is then processed to configure a robot simulation environment. Our approach supports (i) augmenting simulations based on ego-agent behaviors, (ii) controllable, counterfactual scenario generation, (iii) reasoning about agent cognition and mental states, and (iv) reasoning with distinct sensing modalities, such as braking due to faulty GPS signals. We demonstrate our method in autonomous driving and robot manipulation tasks, generating more diverse, complex simulated environments compared to existing simulations with high success rates, and enabling controllable generation for corner cases. This approach enhances the validation of robot policies and supports data or simulation augmentation, advancing scalable robot learning for improved generalization and robustness.
[ "generative simulation", "robot", "autonomous driving", "large language model", "inverse design" ]
Accept (Poster)
https://openreview.net/pdf?id=EbCUbPZjM1
https://openreview.net/forum?id=EbCUbPZjM1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yeUwITDALH", "xHEnqNEz5f", "u7LPNK5Jh5", "szCPcCMWJt", "sv7JorZXQs", "qr9jv4yAWd", "poYjQMNOH1", "m7mQmN9e0W", "h3t070owOd", "fb450mWbM1", "cdJv7h6CQt", "caachXAqqQ", "cICOeUtFYi", "YSzWzSrZhZ", "Y1kxnfQNBk", "VagXvy0jAQ", "VAZkdwKfkR", "ULEPR7IsaX", "UF7rOdgNIW", "RsUy98k0Uc", "PFC0y6kSni", "OO6uIiCA8R", "MlcmoDhoIN", "LpjVTaWIOy", "KBRjyf3ONk", "FgkQ4UQ8SM", "DGiCd6e2RP", "Cnkvv5rGMn", "8NMQ4E5xdp", "7hbFfGq8dS", "7faunUTleb", "5QAVceyDxE", "5J4fyy9DTN", "3EAEvf7nlY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1733211440684, 1732884840358, 1733187877519, 1732786616701, 1733205510773, 1732781781242, 1733069560501, 1730516624146, 1732805279530, 1732786212346, 1732805947190, 1732917719442, 1733198009165, 1732916641898, 1733202564212, 1733148798619, 1730711814603, 1733068962789, 1730623833801, 1733149340465, 1733208690452, 1730625265272, 1732783559443, 1733143041859, 1733137334943, 1733195936944, 1732917340426, 1733069256106, 1733069879180, 1734445892018, 1733201682458, 1732916893372, 1732804861468, 1737524137992 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_hjAh" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_x8VP" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_x8VP" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_q9yg" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_x8VP" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_x8VP" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_HrM2" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Reviewer_q9yg" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Area_Chair_PWRP" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Submission11673/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer x8VP\", \"comment\": \"Dear Reviewer x8VP,\\n\\n> Training a policy usually is time consuming. 
What is the actual time complexity for validating each newly generated environment (e.g., validation time in minutes or hours per environment)?\\n\\nFor manipulation tasks, validation typically takes less than two minutes. For driving scenarios, we empirically observed that it takes approximately 5-15 minutes per environment. This is primarily because each validation requires running a simulation, which takes ~0.5 seconds using CARLA. Notably, in our proposed problem setting, the trajectory of the ego-vehicle is already known, offering opportunities for optimization. However, these optimizations were not implemented in the current work and remain part of our planned future improvements \\n\\n> As it stands, the work risks being categorized as primarily LLM prompt engineering - an increasingly common approach that, while useful, may not meet the threshold for scientific/theoretical contribution in this field.\\n\\nThe primary role of the LLM in our method is to generate diverse, plausible scenarios. Changing models, adjusting hyperparameters, or relying on prompt-engineering methods like ChatScene, which uses GPT-4, result in only marginal improvements in scenario diversity. This is because these methods are limited by the context specified in their prompts, restricting the scope of generated scenarios. In contrast, our method expands the range of possible scenarios by proposing new, potentially unrelated contexts, which are then validated for plausibility using an LLM as a classifier, resulting in significantly greater diversity (Table 1-2), enabling us to simulate corner-case scenarios for safety-critical applications.\\n\\nOther reviewers, hjAh and HrM2, raised similar concerns but ultimately expressed that their concerns were thoroughly addressed in the rebuttal and appropriately reflected in the revised manuscript. 
\\n\\nBest,\\n\\nAuthors.\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank all reviewers for their thoughtful and constructive feedback. We are encouraged to hear the reviewers,\", \"find ReGen\\u2019s core functionality appealing (Reviewer hjAh) as it addresses an important research problem (Reviewers HrM2, q9yg) that will improve the robustness and safety of policies under corner cases (Reviewers q9yg, HrM2, x8VP)\", \"great experiments to show strong capability (Reviewer hjAh) and comprehensive evaluation against multiple baselines (Reviewer x8VP) and thorough implementation across multiple domains (Reviewers x8VP, q9yg)\", \"In response to the feedback, we have provided individual responses to address each reviewer's remaining concerns. Additionally, we have updated the manuscript, with all changes highlighted in red to enhance clarity and address any missing details. Below, we summarize the added experiments and revisions to the paper.\", \"Added extensive information on prompts, examples, and code, including thorough demonstrations of the entire process for constructing the simulation environment (Reviewers HrM2, x8VP, hjAh)\", \"Revised the method description for greater clarity (Reviewers x8VP, HrM2, hjAh), incorporating details such as model versions and hyperparameter settings (Reviewer HrM2).\", \"Conducted a comprehensive analysis to enhance the depth of discussion, addressing why our method outperforms the baseline (Reviewer HrM2), its handling of failure modes (Reviewer x8VP), and its novel contributions (Reviewer hjAh).\", \"Performed ablation experiments to clearly demonstrate method\\u2019s novelty and contribution (Reviewer hjAh)\", \"For more details, please refer to our individual responses. We extend our gratitude to all reviewers for their time and valuable feedback. 
Please do not hesitate to share any further comments or suggestions.\"]}", "{\"title\": \"Rebuttal Ending Soon - Could you let us know your feedback and revisit evaluation\", \"comment\": \"Dear Reviewer x8VP,\\n\\nThank you for your valuable feedback, which has significantly contributed to improving the clarity and quality of our paper.\\n\\nWe are pleased that our rebuttal has addressed the concerns raised by all other reviewers and hope it has similarly resolved any questions you may have. Could you let us know your feedback and consider revisiting your evaluation if you feel your concerns have been adequately addressed? Thank you!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Pt 2\", \"comment\": \"> Could you provide more details about the specific prompting strategies used for graph expansion?\\n\\nThank you for your feedback. We have added comprehensive prompt examples for LLMs in Appendix 5. Specifically, Appendix 5.1 contains prompts for node proposals, and Appendix 5.2 includes prompts for edge creation. \\nAdditionally, detailed examples are provided in Appendix 1.2 for nodes, covering event, entity, and property nodes, and in Appendix 1.3 for edge proposals, including event-to-event, entity-to-event, and property-to-entity connections. Full output examples can also be found in Appendix 5. For code examples, refer to Appendix A1.1 for the asset database, A2 for the finite state machine and the full scenario configuration example.\\n\\n> How do you ensure consistency and reliability in the LLM output, and how do you handle cases where the LLM generates invalid or inconsistent relationships? \\n\\nWe set both the temperature and top-p to 0 to increase the consistency in the LLM output, which empirically yields more deterministic results. Our ablation experiment (Table 4) demonstrated minimal variations in its responses.\\n\\n> Could you provide a detailed analysis of the failure modes\\n\\nThank you for raising this point. 
We have updated the manuscript to include a detailed analysis of the failure modes observed in two key stages of our method: (1) the graph expansion stage (Appendix A.3, paragraph 1) and (2) the grounding into simulation (Appendix A.4.2).\\n\\n> About generated task completeness. How do you guarantee that the generated tasks are feasible for a robot or autonomous agent? i.e., that there is a practical solution to the generated tasks\\n\\nOur approach ensures task feasibility by converting each scenario generated from the graph expansion process into a finite state machine (FSM). The FSM defines a satisfiability problem, which we solve using tools like Google\u2019s CP-SAT solver. The solver finds solutions for variables such as the x, y coordinates of the start and end positions $(x_0, y_0, x_T, y_T)$, as well as the speed, such that they satisfy the constraints imposed by the FSM. A practical solution exists if the simulation terminates in a state that satisfies the terminal condition of the FSM. The feasibility rate is 80% for driving and 78% for manipulation.\\n\\n- Example: The FSM shown below is for the scenario \u201cEgo-vehicle stopping \u2190 Ambulance approaching from behind.\u201d The FSM translates the low-level trajectory into abstract states for state tracking. For example, the abstract state \u201cAmbulance Approaching\u201d is defined by the constraint that the ambulance is behind the ego-vehicle and in motion: `if behind_ego('ambulance') and is_currently_moving('ambulance')`. 
Additional details on this process can be found in Appendix A.2.\\n\\n```\\nfsm = [[('ambulance1', 'Ambulance Approaching'), ('ego-vehicle', 'Ego Driving Steady')],\\n       [('ambulance1', 'Ambulance Close to Ego')],\\n       [('ego-vehicle', 'Ego Braking')],\\n       [('ego-vehicle', 'Ego Stopped Abruptly')],\\n       [('ambulance1', 'Ambulance Passing Ego')]]\\n```\"}", "{\"title\": \"Response to reviewer x8VP\", \"comment\": \"Dear Reviewer x8VP,\\n\\nThanks for your prompt responses.\\n\\n> How did you obtain the policy? Replicate the agent behavior?\\n\\nWe define behaviors either as a trajectory or a reward function. In driving scenarios, we reuse the trajectory. For manipulation tasks, we retrain a new policy based on the reward function.\\n\\n> The thing is that the problem raise from such inverse design, so how the inverse design could resolve the problem.\\n\\nThe best validation we can propose is to obtain a policy and test whether it performs effectively in the simulated environment. In such a setup, the validation that determines the correctness of the generated simulated environment is to check whether the generated environment indeed provides a scenario in which the given behavior should have occurred. For driving tasks, we reuse a motion trajectory, while for manipulation tasks, we retrain a new policy using the defined reward function. We would greatly appreciate your thoughts on whether there are alternative validation protocols or methodologies that could further strengthen this evaluation.\\n\\n> Table 1-2 does not demonstrate the proposed method could complements existing benchmarks. \\n\\nRegarding complementing existing benchmarks, our experiments effectively complement RoboGen. 
We evaluate our method by comparing it to RoboGen, using a subset of its behaviors (i.e., reward functions) that have already been benchmarked against other baselines. As demonstrated in Table 2, our method achieves greater diversity compared to both the subset we utilized and the full set of RoboGen behaviors.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"We thank reviewer q9yg for their feedback and for highlighting areas that could benefit from further explanation. Below, we provide additional experimental studies and discussions to address these concerns.\\n\\n> Primarily focused on diversity and complexity, with less emphasis on the realism and physical accuracy of the generated simulations\\n\\nThank you for your feedback. We agree that realism plays a crucial role in ensuring the generated scenarios align with human expectations. We address realism through two aspects. \\n\\n- First, we ensure the scenario\\u2019s realism by validating the plausibility of cause-and-effect relationships between events. For example, a scenario is realistic only if the cause-and-effect is logical, such as an event \\\"emergency vehicle behind the ego-vehicle\\\" causing the event \\\"ego-vehicle to change lanes.\\u201d \\n\\n- Second, we conducted a preliminary human study to evaluate the physical realism of the generated simulations. In this study, participants assess whether the scenarios are both realistic and consistent with their descriptions. We will include the analysis of the results from the study in a future revision.\\n\\n> LLMs can be computationally expensive and may not always generate semantically accurate or physically plausible scenarios\\nThank you. We address this concern in two parts: \\n- Empirically, our method takes approximately 1-3 minutes to generate a scenario using GPT-4o-2024-08-06.\\n - Details: We classify multiple candidates in a single prompt, reducing the query complexity in each iteration of the graph expansion step to $O(1)$. 
This approach avoids the $O(n^2)$ complexity of pairwise comparisons, where $n$ is the number of candidate nodes. \\n- Second, we employ an LLM as a general classifier to evaluate the plausibility of proposed events, entities, and properties within the graph. This method ensures that each graph component adheres to the constraints of the simulation engine.\\n - Details: Our edge construction leverages an LLM as a general classifier to evaluate the plausibility of candidate connections in the context of the source node. For event-to-event edges, this process identifies causal relationships. For entity-to-event and property-to-event edges, it verifies whether the entities (e.g., an ambulance) and their properties (e.g., a siren) are supported by the simulation engine. These entities and properties are defined in the asset database (see Appendix 1.1), ensuring compatibility with the simulation of the specified event. We conducted an ablation study (more details in Appendix A.3) to evaluate the accuracy of each edge type.\\n\\n> Other domains not extensively explored\\n\\nWe appreciate the reviewer\\u2019s comment regarding the exploration of other domains. Most existing works tend to focus either on driving scenarios [A1, A2, A3] or manipulation tasks [A4, A5, A6], often neglecting cross-domain exploration. In our work, we chose driving and manipulation as they represent two important fields of interest to the robotic and simulation communities. We would welcome the reviewer\\u2019s suggestions on additional domains that could benefit from our approach or serve as compelling directions for future work.\", \"citations\": \"A1. X. Yang et al., \\u201cDriveArena: A Closed-loop Generative Simulation Platform for Autonomous Driving,\\u201d arXiv.org, 2024. https://arxiv.org/abs/2408.00415.\\n\\nA2. Zhang, Jiawei, et al. \\u201cChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.14062. 
\\n\\nA3. Hu, Anthony, et al. \\u201cGAIA-1: A Generative World Model for Autonomous Driving.\\u201d ArXiv.org, 2023, arxiv.org/abs/2309.17080. \\n\\nA4. Wang, Yufei, et al. \\u201cRoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation.\\u201d ArXiv.org, 2 Nov. 2023, arxiv.org/abs/2311.01455. \\n\\nA5. Wang, Lirui, et al. \\u201cGenSim: Generating Robotic Simulation Tasks via Large Language Models.\\u201d ArXiv.org, 2023, arxiv.org/abs/2310.01361. \\n\\nA6. Hua, Pu, et al. \\u201cGenSim2: Scaling Robot Data Generation with Multi-Modal and Reasoning LLMs.\\u201d ArXiv.org, 2024, arxiv.org/abs/2410.03645.\"}", "{\"title\": \"2 Days Left for Rebuttal -- We would like to know your feedback\", \"comment\": \"Dear Reviewer hjAh,\\n\\nWe wanted to follow up on our earlier response to your feedback.\\n\\nThank you once again for dedicating your time and effort to reviewing our paper. We have worked diligently to address all your concerns, including revising the manuscript and providing additional experiments to further strengthen our work.\\n\\nWith only two days remaining in the rebuttal period, we would greatly appreciate it if you could let us know if there are any remaining questions or concerns we can address.\\n\\nWe look forward to hearing from you. Thank you!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper presents ReGen, a generative simulation framework that automates the creation of robot simulation environments through inverse design. 
The approach leverages large language models (LLMs) to construct and expand graphs capturing cause-and-effect relationships and environmental properties, which are then converted into executable simulation environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Appealing core functionality: leveraging inverse design with large language models to generate realistic and diverse simulation environments by capturing cause-and-effect relationships and relevant environmental properties.\", \"Great experiments showing a strong capability to generate diverse and corner-case scenarios.\"], \"weaknesses\": \"1. The paper's novelty analysis is insufficient:\\n- The causality reasoning and diversity aspects appear to be inherent properties of the LLMs rather than unique contributions of the framework\\n- The generation capabilities seem heavily dependent on the underlying simulator's features\\n\\n2. There is no analysis of the generated graph, only the results are shown, making it unclear whether the main contributions come from the LLM rather than the proposed graph structure. Additionally, there is no ablation study on the graph; for instance, it is unclear what would happen without edge construction.\", \"questions\": \"1. In line 161, what description is used when inferring the property node from the source node?\\n2. In line 155, is there an example that clarifies what the prior represents exactly?\\n3. This paper demonstrates a plausible generation ability; however, it lacks analysis of the entire generated graph. Could you provide the complete graph generated by ReGen?\\n4. Is there an issue with the brackets in line 182? It seems there are multiple possible interpretations here. A more precise notation or explanation is needed.\\n5. When expanding the event-to-event nodes, if the LLM generates events that the simulator cannot execute, how does the system handle this?\\n6. 
ReGen exhibits strong capabilities in generating diverse scenarios, yet realism is not thoroughly addressed in the large-scale experiments. Could you elaborate on the realism of the generated scenarios? If the generated scenarios are not reasonable, off-the-shelf VLM understanding may perform poorly, as observed in Table 3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Pt 2\", \"comment\": \"> In line 161, what description is used when inferring the property node from the source node?\\n\\nProperty nodes specify attributes for each entity, including elements such as location, which is dynamically generated using an LLM (e.g. \\u201cin the left lane in front of the ego-vehicle\\u201d), or the possible states of an entity in the simulator, which are retrieved from the asset database (see Appendix 1.1\\u20131.2 for additional details). For example, candidate property nodes for a traffic light include all its possible states in the simulator, such as \\u201cred,\\u201d \\u201cgreen,\\u201d \\u201cyellow,\\u201d and \\u201coff.\\u201d These proposed nodes serve as candidates for the edge construction step. Examples of this process can be found in Appendix A.1.2. Furthermore, the original manuscript has been updated in the 'Node Proposal' section of the Method to incorporate this clarification.\\n\\n> In line 155, is there an example that clarifies what the prior represents exactly?\\n\\nThe prior refers to external constraints or preferences that guide the node proposal process. Specifically, the prior can take the form of a word or a list of words derived either from (i) human preferences, (ii) or from the entities available within the simulation environment. For example, if \\\"police car\\\" is an entity in the simulation engine, the prior guides the LLM to generate candidate nodes related to this entity. 
The corresponding prompt might be, \\u201cKeywords for police car.\\u201d In response, the LLM could propose candidate nodes such as \\u201cpolice chase.\\u201d These candidates are then evaluated during the edge construction step to ensure they align with plausible causal relationships to the source node, e.g., \\u201cego vehicle to stop \\u2190 police chase.\\u201d\\n\\n> This paper demonstrates a plausible generation ability; however, it lacks analysis of the entire generated graph. Could you provide the complete graph generated by ReGen?\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the need for more comprehensive documentation of our methodology. In response, we provide the complete graph generated by ReGen in \\u201cCode Example 6\\u201d in Appendix A.2.\\n\\nMore detailed examples are provided in Appendix 1.2 for nodes, covering event, entity, and property nodes, and in Appendix 1.3 for edge proposals, including event-to-event, entity-to-event, and property-to-entity connections. Full output examples can also be found in Appendix 5. For code examples, refer to Appendix A1.1 for the asset database, A2 for the finite state machine, and A2.2 for the full scenario configuration example. Additionally, we have also added comprehensive prompt examples for LLMs in Appendix 5. Specifically, Appendix 5.1 contains prompts for node proposals, and Appendix 5.2 includes prompts for edge creation. \\n\\n> Is there an issue with the brackets in line 182? It seems there are multiple possible interpretations here. 
A more precise notation or explanation is needed.\\n\\nWe acknowledge that our explanation was unclear and have revised this section for clarity.\\n\\n> When expanding the event-to-event nodes, if the LLM generates events that the simulator cannot execute, how does the system handle this?\\n\\nOur method employs an LLM as a general classifier, inherently performing rejection sampling by evaluating the plausibility of candidate connections in the context of the source node, thereby ensuring that only simulatable events are generated. \\n- Detail: For event-to-event edges, this approach identifies causal relationships. For entity-to-event and property-to-event edges, it evaluates whether the entities (e.g., an ambulance) and their properties (e.g., a siren) are available in the simulation engine. These entities and properties are defined in the asset database (see Appendix 1.1). We have revised the \\u201cEvent Construction\\u201d section of the method to enhance clarity and provide a more detailed explanation. Additional examples are included in Appendix 1.3, and the accuracy of this classifier is presented in Table 4.\"}", "{\"title\": \"Pt 1\", \"comment\": \"We thank reviewer x8VP for their thoughtful feedback and for highlighting areas that could benefit from further explanation. Below, we provide additional experimental studies and discussions to address these concerns.\\n\\n> Do not see a clear strength in terms of novelty\\n\\nWe appreciate the reviewer\\u2019s feedback and recognize the need to better articulate the novelty of our approach.\\n- Previous methods [A1, A2] aim to increase the diversity of simulations by generating new reward functions, which in turn produce more varied behaviors. 
This process is computationally intensive and inherently constrained by the need to define distinct reward functions for every simulated scenario.\\n- Our method proposes reusing and adapting existing behaviors (e.g., a trajectory or reward function) to generate novel simulations. Our method achieves this by altering the environmental context in which these behaviors occur. This approach overcomes the bottleneck of prior methods and enables the creation of richer and more varied scenarios, as extensively demonstrated through experiments. The key insight is that while behaviors are inherently limited, the environments in which they occur are far more diverse. For example, the abrupt stopping of a self-driving car can apply to various contexts, such as encountering a red traffic light, responding to a pedestrian stepping into the road, or yielding to an approaching police car with its siren on. \\n\\n> No comparison with human-designed scenarios\\n\\nOne of the methods we compared against is DriveCoT, which employs a rule-based expert policy to generate ground truth labels for driving scenarios selected from the CARLA Leaderboard 2.0, based on the NHTSA crash typology. We consider these scenarios to be designed by human experts.\\nFirst, we evaluated the diversity of scenarios generated by our method compared to those from DriveCoT, with our method achieving a significantly higher diversity score (Table 1). Second, we assessed the performance of VLMs on scenarios from both DriveCoT and our method (Table 3).\\n\\n> Unclear metrics for scenario complexity beyond embedding diversity \\n\\nIn our study, we measure complexity by assessing how challenging it is for VLMs to respond accurately to a scenario (Table 3). By evaluating VLM performance in these scenarios, we aim to provide a practical metric for assessing scenario complexity beyond embedding diversity. 
For example, we observed that in lane-change scenarios such as \\\"avoiding debris,\\\" \\\"overtaking a slow vehicle,\\\" \\\"merging into an open lane,\\\" or \\\"swerving to avoid a wrong-way driver,\\\" off-the-shelf VLMs often defaulted to deceleration as the primary action. This behavior, consistent with prior findings [A3], indicates a struggle to reason through nuanced spatial and situational contexts. In contrast, scenarios generated by methods like DriveCoT (which also uses the CARLA simulator) involve spawning objects directly in the ego-vehicle\\u2019s path, prompting straightforward deceleration responses. Here, the VLM's default bias aligns with expected behaviors, demonstrating a lower level of complexity. \\n\\n> The paper inadequately addresses fundamental simulation constraints, as many proposed scenarios can't be fully simulated due to engine limitations, but there is no systematic approach for dealing with these limitations or assessing their impact on practical utility.\\n\\nIn our approach, a scenario is represented as a graph, with nodes representing events, entities, and properties, and edges capturing transitions between them. To ensure plausibility, we employ an LLM as a classifier to evaluate the validity of each edge. We have revised the Methodology section to improve clarity. To address simulation constraints explicitly, we construct an asset database represented as a directed graph, which encodes the capabilities of the simulation engine (see Appendix 1.1).\\n\\n---\\n\\nA1. Zhang, Jiawei, et al. \\u201cChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.14062.\\n\\nA2. Wang, Yufei, et al. \\u201cRoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation.\\u201d ArXiv.org, 2 Nov. 2023, arxiv.org/abs/2311.01455. \\n\\nA3. S. Sreeram, T.-H. Wang, A. Maalouf, G. Rosman, S. Karaman, and D. 
Rus, \\u201cProbing Multimodal LLMs as World Models for Driving,\\u201d arXiv.org, 2024. https://arxiv.org/abs/2405.05956.\"}", "{\"title\": \"Pt 3\", \"comment\": \"> Could you elaborate on the realism of the generated scenarios? If the generated scenarios are not reasonable, off-the-shelf VLM understanding may perform poorly, as observed in Table 3.\\n\\n- We agree that realism plays a crucial role in ensuring the generated scenarios align with human expectations and enable better performance for off-the-shelf VLMs. However, we observed that VLMs frequently responded with deceleration as the default action. This behavior, consistent with prior findings [A1], indicates VLMs struggle to reason through nuanced spatial and situational contexts. In contrast, scenarios generated by DriveCoT (which also uses the CARLA simulator) involve spawning objects directly in the ego-vehicle\\u2019s path, requiring a deceleration response. Here, the VLM's default bias aligns with the expected behavior. The observed failures highlight a limitation of off-the-shelf VLMs in reasoning about complex driving situations, rather than pointing to an inherent lack of realism in the scenario design. Importantly, this limitation cannot be attributed to insufficient input, as the VLM is provided with privileged information, including the location and speed of other cars.\\n\\n- We conducted a preliminary human study to evaluate the realism of the generated simulations. In this study, participants assess whether the scenarios are both realistic and consistent with their descriptions. We will include the analysis of the results from the study in a future revision.\\n\\n---\\n\\nA1. S. Sreeram, T.-H. Wang, A. Maalouf, G. Rosman, S. Karaman, and D. Rus, \\u201cProbing Multimodal LLMs as World Models for Driving,\\u201d arXiv.org, 2024. 
https://arxiv.org/abs/2405.05956.\"}", "{\"title\": \"Thank you for raising the score\", \"comment\": \"Dear reviewer HrM2,\\n\\nThank you very much for raising the score and for acknowledging our efforts during the rebuttal to address your concerns. Your feedback has been invaluable in guiding us to significantly enhance the quality of the paper. We will incorporate the revisions into the final version.\\n\\nIf you have any additional comments or suggestions, please feel free to let us know.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Thanks for your active and detailed response.\", \"comment\": \"Dear Authors,\\n\\nThank you for your detailed response to the reviewers' comments. I apologize for my delayed reply due to recent illness.\\n\\nAfter carefully reviewing your responses to all reviewers, I maintain significant concerns about the fundamental aspects of your work. While you propose using LLMs to validate the feasibility of generated environments, this approach raises several critical questions: 1. How can LLMs effectively validate robot-environment interactions without an in-depth understanding of robot embodiment? Specifically, how can they verify whether objects are reachable within the robot's configuration space? 2. What mechanisms ensure that the generated environments are complete and allow for successful task execution by the robot? These concerns point to a broader issue: the lack of a comprehensive validation protocol for the generated environments. Without validation across multiple dimensions (physical feasibility, task completeness, etc.), it becomes impossible to differentiate between failures caused by the robot model versus environmental constraints. Your current implementation, as shown in Figure 3, presents relatively simple manipulation environments that fall short when compared to the complexity and realism offered by existing benchmarks such as ManiSkill, Habitat, and AI2-THOR. 
These environments demonstrate the level of sophistication required for meaningful robot simulation research.\\n\\nGiven these unresolved fundamental issues, particularly regarding environment validation and complexity, I maintain my original rating.\"}", "{\"title\": \"Please let us know your feedback\", \"comment\": \"Dear Reviewer x8VP:\\n\\nWe sincerely appreciate your insightful comments and advice, which have been instrumental in improving the quality and clarity of our paper.\\n\\nWe hope that the revisions, along with the additional details and experimental results we provided, have sufficiently addressed your concerns. As the rebuttal period nears its conclusion, could you kindly let us know if there are any remaining questions or concerns?\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"> In Q1 ... \\\"each simulated environment includes a policy for the ego agent to solve and this takes into account task completeness, physical feasibility, and object reachability.\\\" ...\\n\\nHow did you obtain the policy? Replicate the agent behavior?\\n\\n> In Q2 ... \\\"Our setup is slightly different in that we adopt an inverse design approach\\\" ...\\n\\nYour answer does not address this problem. The thing is that the problem arises from such inverse design, so how could the inverse design resolve the problem.\\n\\n> ... \\\"our method complements existing methods like ManiSkill, Habitat, and AI2-THOR by enabling the reuse of learned policies in diverse simulated environments.\\\" ...\\n\\nTable 1-2 does not demonstrate the proposed method could complement existing benchmarks. More experiments should be conducted to show evidence of how the proposed method handles complex scenes (e.g., a layout with several rooms, not a \\\"table-top\\\" scene). 
The complexity in domestic scenes is quite different from the complexity in autonomous driving scenes; the latter is not sensitive to the arrangement of objects while you alter the environment.\"}", "{\"title\": \"Rebuttal Period Closing Soon \\u2013 Could you let us know your feedback\", \"comment\": \"Dear Reviewer x8VP,\\n\\nWe sincerely appreciate your valuable feedback, which has greatly helped us enhance the quality and clarity of our paper.\\n\\nWe are encouraged that our rebuttal has effectively addressed the concerns raised by the other reviewers, and we hope it has similarly clarified any questions you may have. As the rebuttal period draws to a close, we kindly request your thoughts on our rebuttal and ask that you consider raising your score if you feel your concerns have been resolved. If there are any remaining issues, please do not hesitate to let us know, and we will do our best to address them promptly.\\n\\nBest regards,\\n\\nAuthors\"}
ReGen presents an inverse design approach for generative simulation, which allows for the creation of diverse and complex simulated environments based on agent behavior and textual descriptions.\\n2. ReGen generates more diverse and complex simulated environments compared to existing simulations, as demonstrated in autonomous driving and robot manipulation tasks.\\n3. ReGen enables controllable generation for corner cases, which is important for safety-critical applications like autonomous driving.\", \"weaknesses\": \"1. The evaluation of ReGen is primarily focused on diversity and complexity, with less emphasis on the realism and physical accuracy of the generated simulations.\\n2. ReGen heavily relies on LLMs, which can be computationally expensive and may not always generate semantically accurate or physically plausible scenarios.\\n3. The applicability of ReGen to other robotics domains beyond autonomous driving and robot manipulation is not extensively explored.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"2 Days Until Rebuttal Deadline \\u2013 Please give us feedback\", \"comment\": \"Dear Reviewer q9yg,\\n\\nWe wanted to follow up on our earlier response to your feedback on our manuscript.\\n\\nAs previously mentioned, we have incorporated additional experiments and expanded the discussion in the revised manuscript, with all changes highlighted in red. With the rebuttal deadline in 2 days, we kindly ask if there are any remaining concerns or points you would like us to address. \\n\\nWe look forward to hearing from you. Thank you once again for your time and consideration.\\n\\nBest regards,\\nAuthors\"}", "{\"summary\": \"This paper introduces ReGen, a generative simulation framework that automates the creation of robot simulation environments using inverse design. 
ReGen takes a robot's behavior and textual descriptions as input, then uses large language models to construct and expand causal graphs that capture relationships between events, entities, and their properties, which are then converted into executable simulation environments. The framework is implemented and validated in both autonomous driving and robot manipulation tasks, demonstrating capabilities in augmenting simulations based on ego-agent behaviors, generating counterfactual scenarios, reasoning about agent cognition, and handling different sensing modalities. The experimental results show that ReGen generates more diverse environments compared to existing simulations, effectively creates corner cases for safety-critical applications, and produces more challenging vision-language-action datasets for vision language models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"While I do not see a clear strength in terms of novelty, the authors provide a comprehensive evaluation against multiple baselines and a thorough implementation across autonomous driving and manipulation tasks to demonstrate the applicability of the proposed method.\", \"weaknesses\": \"The main contribution of this paper is vague. The use of LLMs to generate simulated task environments for robotic tasks cannot be considered as a major contribution. The proposed method of using LLMs for graph expansion and simulation generation appears to be a straightforward extension of existing work such as [a, b]. I do not see a convincing argument that states the technical contributions of the proposed approach other than some prompt engineering. 
The \\\"inverse design\\\" is essentially a rebranding of standard goal-conditional generation approaches, where the goal is guided by the inverse design and feed into the LLMs.\\n\\nThe evaluation of the proposed method is not thorough enough, with only brief mentions of success rates without analysis of failure modes, no comparison with human-designed scenarios, and unclear metrics for scenario complexity beyond embedding diversity. Furthermore, the paper inadequately addresses fundamental simulation constraints, as many proposed scenarios can't be fully simulated due to engine limitations, but there is no systematic approach for dealing with these limitations or assessing their impact on practical utility. \\n\\n[a] Wang, Yufei, et al. \\\"Robogen: Towards unleashing infinite data for automated robot learning via generative simulation.\\\" arXiv preprint arXiv:2311.01455 (2023). \\n[b] Zhang, Jiawei, Chejian Xu, and Bo Li. \\\"ChatScene: Knowledge-based safety-critical scenario generation for autonomous vehicles.\\\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n------\", \"additional_comments\": \"Given that direct responses are no longer possible on the original [thread](https://openreview.net/forum?id=EbCUbPZjM1&noteId=yeUwITDALH), I am sharing my reply here.\\n\\nTraining policies in complex environments demands significantly greater computational resources and effort (e.g., see [c]). The fact that your policy was obtained in just 2 minutes suggests the task and environment were overly simplified comparing to actual scenarios. I have concerns about whether your proposed method would remain effective when scaled to more complex tasks and environments.\\n\\nRegarding your comments about the primary contribution, I remain unconvinced. The work does not demonstrate any novel or innovative applications of LLMs. Therefore, I will keep my original rating.\\n\\n[c] Gu, Jiayuan, et al. 
\\\"Multi-skill mobile manipulation for object rearrangement.\\\" arXiv preprint arXiv:2209.02778 (2022).\", \"questions\": \"1. Regarding the LLM implementation: Could you provide more details about the specific prompting strategies used for graph expansion? How do you ensure consistency and reliability in the LLM output, and how do you handle cases where the LLM generates invalid or inconsistent relationships?\\n2. Regarding the failures: The paper reports success rates of 80% for driving and 78% for manipulation. Could you provide a detailed analysis of the failure modes? \\n3. About simulation constraints: How do you systematically identify and handle cases where desired scenarios exceed the capabilities of the simulation engine? Is there a formal process for determining which aspects of a scenario can be reasonably approximated and which must be excluded?\\n4. About generated task completeness. How do you guarantee that the generated tasks are feasible for a robot or autonomous agent? i.e., that there is a practical solution to the generated tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Could you please give us feedback\", \"comment\": \"Dear Reviewer hjAh,\\n\\nWe greatly appreciate your thoughtful feedback, which has been invaluable in helping us improve the quality and clarity of our paper.\\n\\nWe are encouraged that our rebuttal has successfully addressed the concerns raised by the other reviewers, and we hope it has similarly clarified any questions or concerns you may have. As the rebuttal period comes to an end, we kindly ask for your thoughts on our responses and invite you to consider revisiting your evaluation. If you feel your concerns have been resolved, we would be grateful if you could raise your score accordingly. 
If there are any remaining points that need clarification, please do not hesitate to let us know, and we will gladly address them promptly.\\n\\nThank you again for your time.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your response.\\n\\nTraining a policy is usually time-consuming. What is the actual time complexity for validating each newly generated environment (e.g., validation time in minutes or hours per environment)?\\n\\nFurthermore, the method's capability to handle complex domestic environments remains questionable. Given that the primary contribution appears to be application-focused rather than theoretical, more thorough experimental evidence is expected to demonstrate that the proposed method indeed significantly improves upon prior art. However, substantial improvements typically stem from profound insights, which are absent in this paper.\\n\\nAs it stands, the work risks being categorized as primarily LLM prompt engineering - an increasingly common approach that, while useful, may not meet the threshold for scientific/theoretical contribution in this field.\"}", "{\"summary\": \"This paper presents ReGen, which, given an existing agent's behavior and motion trajectory, counterfactually generates simulated environments that explain the possible causes and preconditions of this agent's trajectory via LLM and VLM. In this way, the authors are able to augment simulation data that allows the training of more robust models and policies. The authors then compare their approach to previous baselines in self driving (Carla simulator) and in robot manipulation (PyBullet), demonstrating that their approach effectively generates more diverse simulation environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Diversifying and augmenting existing simulated environments and trajectories through counterfactual generation is an important research problem. 
It provides a promising way to improve the robustness and safety of agents and policies under corner cases and unexpected scenarios.\", \"weaknesses\": [\"The paper suffers from significant weaknesses in writing clarity and the depth of analysis, specifically:\", \"The methodology section 2 does not detail the models, the model versions, and the important hyperparameters used for prompting LLM / VLM for counterfactual generation. Some of the details are deferred to page 9, which should have been much earlier in the paper.\", \"No prompt examples for LLM / VLM are provided in the appendix, making the method unreproducible. No concrete, detailed examples for the entire process of constructing Carla and PyBullet environments through counterfactual generation (following algorithm 1) are provided.\", \"Algorithm 1 will very likely result in an infinite loop, as there will likely always be at least 1 node with input degree < 1 (unless there are circular cause and effect dependencies).\", \"L320 - ReGen is able to generate simulated environments by augmenting visual observations like GPS measurement jamming. However there is no detailed description of how the visual observations are augmented. It seems that some tool use ability of LLM is invoked.\", \"The experiments only demonstrate that ReGen outperforms baselines, but **why** ReGen outperforms the baseline is unclear. There are no ablations in e.g., prompting to answer this question. For example, why does ReGen outperform ChatScene? Is it due to better prompting and / or the better VLM used (note that no prompts are provided throughout the paper)? Is the comparison with baseline fair, with the same specific versions of GPT4o / GPT4 / Claude 3.5?\"], \"edit\": \"Thanks authors for the rebuttal! 
The revision has significantly improved the quality of the paper and I'm increasing my rating.\", \"questions\": \"Please address all weaknesses listed above.\", \"minor\": [\"Fig 4 caption: \\\"desipte\\\" -> \\\"despite\\\"\", \"\\\"table {x}\\\" should be \\\"Table x\\\"; \\\"figure {x}\\\" should be \\\"Figure {x}\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank reviewer HrM2 for their thoughtful feedback and for highlighting areas that could benefit from further explanation. Below, we provide additional experimental studies and discussions to address these concerns.\\n\\n> Additional method details\\n\\nWe have clarified that all LLM queries in our method utilize the GPT-4o-2024-08-06 model with a temperature and top-p value of 0. Additionally, we have revised the Methodology section to provide this information at the end of the first paragraph in subsection 2.2.\\nMore detailed examples are provided in Appendix 1.2 for nodes, covering event, entity, and property nodes, and in Appendix 1.3 for edge proposals, including event-to-event, entity-to-event, and property-to-entity connections. Full output examples can also be found in Appendix 5. For code examples, refer to Appendix A1.1 for the asset database, A2 for the finite state machine and full scenario configuration example.\\n\\n> Prompt examples\\n\\nThank you for your feedback. We have added comprehensive prompt examples for LLMs in Appendix 5. Specifically, Appendix 5.1 contains prompts for node proposals, and Appendix 5.2 includes prompts for edge creation.\\n\\n> Algorithm 1 will very likely result in infinite loop, as there will likely always be at least 1 node with input degree < 1 (unless there are circular cause and effect dependencies). \\n\\nThe potential for an infinite loop arises because the main graph expansion continuously adds new cause-and-effect relationships. 
However, this process is controlled by user-defined stopping conditions (graph depth or maximum number of nodes). We have clarified this in the manuscript to improve clarity.\\n\\n> L320 - ReGen is able to generate simulated environments by augmenting visual observations like GPS measurement jamming. However there is no detailed description of how the visual observations are augmented. It seems that some tool use ability of LLM is invoked. \\n\\nIn the case of GNSS jamming, the entity is the GNSS sensor, and the property is noise. We leverage an LLM to generate a function that invokes add_gnss_noise from the CARLA API, simulating a \\\"GPS Jamming\\\" scenario. The FSM tracks state transitions, enabling us to identify the appropriate moments during the simulation to call this function. This entire process is fully automated. We have revised the original manuscript to clarify this process and included examples in Appendix 2 for further illustration. \\n\\n> Why ReGen outperforms the baselines\\n\\nOur method surpasses ChatScene by significantly diversifying the underlying causes for challenging scenarios. ChatScene primarily generates scenarios involving objects crossing in front of the ego-vehicle (e.g., a pedestrian crossing or a car merging), while our method introduces more complex variations, such as a group of pedestrians or a vehicle traveling in the opposite direction. These scenarios empirically were more challenging for the driving policies, such as requiring larger steering adjustments to avoid groups of pedestrians. We have updated the original manuscript to elaborate on this.\\n\\nChanging models, adjusting hyperparameters, or relying on prompt-engineering methods like ChatScene, which uses GPT-4, result in only marginal improvements in scenario diversity. This is because these methods are limited by the context specified in their prompts, restricting the scope of generated scenarios. 
In contrast, our method expands the range of possible scenarios by proposing new, potentially unrelated contexts, which are then validated for plausibility using an LLM as a classifier, resulting in significantly greater diversity (Table 1-2).\\n\\n- Detail: Our approach decouples node proposals from edge construction. A key perspective is that graph expansion systematically explores the LLM's knowledge space, uncovering and organizing latent information into a structured graph. The node proposal component in our method generates new context conditioned on priors. For example, the prior guides the LLM to generate candidate nodes related to an entity such as a \\u201cpolice car\\u201d. The corresponding prompt might be, \\u201cKeywords for police car.\\u201d In response, the LLM could propose candidate nodes such as \\u201cpolice chase.\\u201d Then, during edge construction, the LLM \\u2013 which acts as a general classifier \\u2013 evaluates which nodes are plausible within the context to ensure that the resulting scenarios are not only diverse but also simulatable. Together, the node proposal and edge construction steps enable us to explore all potential events and uncover corner-case scenarios.\"}
Similarly, in the manipulation domain, we have simulated scenarios like \\\"retrieving food from a fridge for a guest,\\\" \\\"storing detergent out of children's reach,\\\" and \\\"letting a guest out the door.\\\" We look forward to further advancing these directions in future work.\\n\\nBest,\\n\\nAuthors\\n\\n---\\n\\nA1. T. Wang, E. Xie, R. Chu, Z. Li, and P. Luo, \\u201cDriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving,\\u201d arXiv.org, 2024. https://arxiv.org/abs/2403.16996\\n\\nA2. C. Sima et al., \\u201cDriveLM: Driving with Graph Visual Question Answering,\\u201d arXiv.org, Dec. 21, 2023. https://arxiv.org/abs/2312.14150\\n\\nA3. Zhang, Jiawei, et al. \\u201cChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.14062.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank the authors for replying to my questions. All my concerns are clear now clear. As for additional domains, I suggest the authors to add more human-robot interaction scenarios. I increased my rate to 6.\"}", "{\"title\": \"Final Hours for Rebuttal \\u2013 Your Feedback Would Be Greatly Appreciated\", \"comment\": \"Dear Reviewer x8VP,\\n\\nWe are pleased to note that our rebuttal has effectively addressed the concerns of all other reviewers, and we are hopeful it has also resolved your queries. If there are any outstanding points from your initial review, we would greatly appreciate it if you could let us know so that we can address them promptly. 
Otherwise, we kindly request you to consider revisiting your evaluation if you feel our response has adequately resolved your concerns.\\n\\nThank you for your time and thoughtful consideration.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"Please let us know your feedback\", \"comment\": \"Dear reviewer q9yg,\\n\\nWe sincerely appreciate the time you dedicated to reviewing our paper and offering constructive suggestions.\\n\\nIn response to your feedback, we have incorporated additional experiments and expanded the discussion in the revised manuscript, with changes highlighted in red. As the rebuttal deadline approaches, could you kindly let us know if you have any remaining concerns?\\n\\nWe look forward to hearing from you.\\n\\nBest regards,\\n\\nAuthors.\"}", "{\"title\": \"Rebuttal Deadline in 2 Days -- Could you please let us know your feedback\", \"comment\": \"Dear Reviewer x8VP,\\n\\nWe wanted to follow up on our earlier response to your feedback on our manuscript.\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work. As mentioned, we have made revisions and included additional details and experimental results to address your concerns. With only two days remaining in the rebuttal period, we kindly ask if there are any remaining questions or points you would like us to address. \\n\\nWe hope to hear from you soon. 
Thank you!\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Rebuttal Period Closing Soon \\u2013 Final Suggestions?\", \"comment\": \"Dear Reviewer HrM2,\\n\\nWe wanted to follow up on our earlier note to thank you once again for raising the score and for acknowledging our efforts to address your concerns during the rebuttal process.\\n\\nWe would greatly appreciate it if you could let us know if you have any additional comments or suggestions, especially as the rebuttal period nears its conclusion.\\n\\nWe look forward to hearing from you.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper proposes a method that generates simulations based on trajectories and behaviors, i.e., ReGen infers the underlying scenarios and environments that could have caused the behavior. It is an interesting idea in addition to the existing generative simulation works. The initial version of this paper missed several core parts to analyze its core contribution. After rebuttal, most of the issues are resolved. However, the lack of rigorous evaluation makes this a very borderline paper. Considering the clarity of the presentation, tremendous improvement during rebuttal, and the contribution of the idea itself, I would recommend this paper for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed most of the concerns. However, some concerns remain for one reviewer.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Dear Reviewer x8VP,\\n\\nThank you for your feedback, we appreciate you getting back to us.\\n\\n> 1. How can LLMs effectively validate robot-environment interactions without a in-depth understanding of robot embodiment? Specifically, how can they verify whether objects are reachable within the robot's configuration space? 
\\n\\nWe would like to clarify that, in our approach, each simulated environment includes a policy for the ego agent to solve and this takes into account task completeness, physical feasibility, and object reachability. We use an LLM to propose high-level constraints -- for example, in the scenario \\\"yielding to an ambulance,\\\" the LLM proposes a constraint that the ambulance should initially be behind the ego-vehicle before passing ahead -- we do not rely on LLMs to verify whether objects are reachable within the robot's configuration space. Our success rate for task completeness for driving is 80% and manipulation is 78%. We have also provided a detailed analysis of failure modes in Appendix A.4.2.\\n\\n> 2. What mechanisms ensure that the generated environments are complete and allow for successful task execution by the robot? These concerns point to a broader issue: the lack of a comprehensive validation protocol for the generated environments.\\n\\nOur setup is slightly different in that we adopt an inverse design approach: given an ego agent behavior, we generate simulated environments where such behaviors could plausibly occur. A key bottleneck in existing methods lies in obtaining effective policies. Our work demonstrates how reusing existing behaviors can enhance the diversity and realism of simulated environments.\\n\\n> Without validation across multiple dimensions (physical feasibility, task completeness, etc.), it becomes impossible to differentiate between failures caused by the robot model versus environmental constraints. Your current implementation, as shown in Figure 3, presents relatively simple manipulation environments that fall short when compared to the complexity and realism offered by existing benchmarks such as ManiSkill, Habitat, and AI2-THOR. 
These environments demonstrate the level of sophistication required for meaningful robot simulation research.\\n\\nWhile our Figure 3 depicts simpler manipulation scenarios, we also address complex and realistic environments in other domains. For example, in autonomous driving, our generated environments include challenging multi-agent interactions such as \\\"yielding to an emergency vehicle\\\" and \\\"picking up a passenger on the sidewalk,\\\" which prior methods [A1, A2, A3] did not address.\\n\\nFor manipulation, we generate complex tasks that involve multiple subtasks. For instance, \\\"storing detergent out of children's reach\\\" requires a sequence of actions: \\\"opening storage furniture,\\\" \\\"placing the detergent inside,\\\" and \\\"closing the furniture door.\\\" Additional examples are available on our website (https://sites.google.com/view/regen-simulation).\\n\\nFurthermore, our method complements existing methods like ManiSkill, Habitat, and AI2-THOR by enabling the reuse of learned policies in diverse simulated environments. This reusability enhances the diversity and applicability of these policies, allowing for the generation of a broader range of simulated environment (Table 1-2, Figure 5).\\n\\nThank you again for your time, and we hope you are feeling better and in good health.\\n\\nBest,\\n\\nAuthors\\n\\n---\\nA1. Zhang, Jiawei, et al. \\u201cChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.14062.\\n\\nA2. Wang, Yufei, et al. \\u201cRoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation.\\u201d ArXiv.org, 2 Nov. 2023, arxiv.org/abs/2311.01455.\\n\\nA3. S. Sreeram, T.-H. Wang, A. Maalouf, G. Rosman, S. Karaman, and D. Rus, \\u201cProbing Multimodal LLMs as World Models for Driving,\\u201d arXiv.org, 2024. 
https://arxiv.org/abs/2405.05956.\"}", "{\"title\": \"Could you please let us know your feedback\", \"comment\": \"Dear Reviewer hjAh,\\n\\nThank you once again for dedicating your time and effort to reviewing our paper. We have worked diligently to address all your concerns, including the revised manuscript and additional experiments to further strengthen our work. Could you kindly let us know if you have any further questions or concerns?\\n\\nMuch appreciated,\\n\\nBest regards,\\n\\nAuthors.\"}", "{\"title\": \"Pt 1\", \"comment\": \"We thank reviewer hjAh for their thoughtful feedback and for highlighting areas that could benefit from further explanation. Below, we provide additional experimental studies and discussions to address these concerns.\\n\\n> The causality reasoning and diversity aspects appear to be inherent properties of the LLMs rather than unique contributions of the framework\\n\\n- Diversity: Changing models, adjusting hyperparameters, or relying on prompt-engineering methods like ChatScene, which uses GPT-4, result in only marginal improvements in scenario diversity. This is because these methods are limited by the context specified in their prompts, restricting the scope of generated scenarios. In contrast, our method expands the range of possible scenarios by proposing new, potentially unrelated contexts, which are then validated for plausibility using an LLM as a classifier, resulting in significantly greater diversity (Table 1-2).\\n- Causal reasoning: Our framework uses the inherent causality reasoning capability of LLMs to ensure that generated scenarios are realistic. 
Here, we define a realistic scenario as one in which cause-and-effect relationships are logical\\u2014for example, an \\\"emergency vehicle behind the ego vehicle\\\" causing the \\\"ego vehicle to change lanes.\\\"\\n\\n> Generation capabilities seem heavily dependent on the underlying simulator's features\\n\\nWe appreciate the reviewer's observation regarding the dependence on the underlying simulator\\u2019s features. However, we view this as a strength of our method: its ability to achieve significantly greater utilization of the simulator\\u2019s features compared to prior approaches. For example, as demonstrated in Table 1 and Figure 5, our method achieved greater scenario diversity than ChatScene\\u2014which also generates scenarios within the same simulator.\\n\\nOur results highlight that our method not only adheres to the constraints imposed by the simulator but also leverages its features more effectively to produce a broader range of scenarios. Moreover, as shown in Table 2 and Figure 3, we illustrate that our approach is not confined to a single simulator or domain but is extendable across various simulation platforms.\\n\\n> There is no analysis of the generated graph, only the results are shown, making it unclear whether the main contributions come from the LLM rather than the proposed graph structure.\\n\\nFigure 7 presents an ablation study showing the impact of changing event-to-event nodes on scenario diversity. Specifically, the pairwise diversity distribution of generated scenarios is categorized by the proportion of scenarios with unique event-to-event edges: 100%, 80%, 40%, and 0%. 
A value of 100% indicates that all compared scenarios have completely distinct causes, while 0% means they share the same cause but differ in properties such as start location or behavior.\\n\\nNotably, the bimodal distribution observed in the results highlights two key effects: introducing new event-to-event nodes significantly enhances diversity by expanding the range of causal relationships, while modifications to property-to-entity nodes result in more nuanced variations. Together, these mechanisms offer greater control over the scope and granularity of the generated scenarios. This analysis has been incorporated into the manuscript in the second paragraph of Appendix A.3.\\n\\n> Additionally, there is no ablation study on the graph; for instance, it is unclear what would happen without edge construction.\\n\\nThank you for your feedback. We conducted an ablation study to evaluate the accuracy of edge creation during the graph expansion stage, with results presented in Table 4 and further detailed in the first paragraph of Appendix A.3. \\n- Detail: We assessed event-to-event edges by testing the LLM\\u2019s ability to distinguish between causal and non-causal variables. For entity-to-event edges, we evaluated whether the LLM accurately identified simulatable events and selected appropriate assets, consistently demonstrating high performance. Finally, for property-to-entity edges, we tested the LLM\\u2019s ability to select the most plausible location, which posed a greater challenge compared to simpler properties, such as determining whether a siren should be on, where it performed more reliably.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
Eaw1ZrsNUN
USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and $\underline{D}$ogmatism in Long $\underline{C}$onversations
[ "mounika marreddy", "SUBBA REDDY OOTA", "Venkata Charan Chinni", "Manish Gupta", "Lucie Flek" ]
Although prior studies have explored Stance and Dogmatism in user conversations, their datasets are constructed at the post level, treating each post as independent and randomly sampling posts from conversation threads. Thus, Stance and Dogmatism labels in these datasets cannot capture the user's opinion fluctuations expressed throughout the entire conversation context. However, identifying user's opinion fluctuations in long conversation threads on various topics can be extremely critical for enhanced personalization, market research, political campaigns, customer service, conflict resolution, targeted advertising, and content moderation. Hence, training language models to automate this task is critical. However, to train such models, gathering manual annotations has multiple challenges: 1) It is time-consuming and costly; 2) Conversation threads could be very long, increasing chances of noisy annotations; and 3) Interpreting instances where a user changes their opinion within a conversation is difficult because often such transitions are subtle and not expressed explicitly. Inspired by the recent success of large language models (LLMs) for complex natural language processing tasks, we leverage Mistral Large and GPT-4 to automate the human annotation process on the following two tasks while also providing reasoning: i) User Stance classification, which involves labeling a user's stance of a post in a conversation on a five-point scale; ii) User Dogmatism classification, which deals with labeling a user's overall opinion in the conversation on a four-point scale. The majority voting on zero-shot, one-shot, and few-shot annotations from these two LLMs on 764 multi-user Reddit conversations helps us curate the USDC dataset. USDC is then used to finetune and instruction-tune multiple deployable small language models for the 5-class stance and 4-class dogmatism classification tasks. 
Additionally, human annotations on 200 test conversations achieved inter-annotator agreement scores of 0.49 for stance and 0.50 for dogmatism, indicating a reasonable level of consistency between human and LLM annotations. We make the code and dataset publicly available [https://anonymous.4open.science/r/USDC-0F7F].
[ "large language models", "annotators", "user opinions", "stance", "dogmatism", "human-llm alignment", "open-source llms", "closed-source llms" ]
Reject
https://openreview.net/pdf?id=Eaw1ZrsNUN
https://openreview.net/forum?id=Eaw1ZrsNUN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xqIOLPd3Eq", "w48QEszr5E", "vvEJC0OYa1", "uXB5RtPyDU", "uFX7VNF847", "tAVlHJM2iF", "rUhblmWN5x", "qvaHw1yvmD", "oBNAYI0oER", "nMkDx50pMV", "iAeWmazUic", "fOsVGwgBAB", "eBpvGUJ43x", "aL4zDnDqKm", "a2MMgfPyI3", "WEaIwmAiei", "Vfzyky1vMH", "RRWBIFeaNu", "NN0Y1bMLeJ", "Mz6oNYdSH1", "MGmgFp2Xv0", "K4UAG5V2cd", "FjqVTTZUOS", "DnHV3F9YZZ", "CSu7rhqF80", "AUFPYNXCDI", "8uIAqgZNeQ", "89dO36tzza", "3vg2DCihRu", "3YCQWSIgSY" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730825204809, 1732134093929, 1732380636110, 1733205136142, 1737523666302, 1731963110434, 1732523443578, 1733212667139, 1732552220398, 1732622836278, 1731929072694, 1730670666934, 1733206317931, 1732523509939, 1731962431588, 1732743479671, 1732523319018, 1730620332179, 1732682313712, 1734688570226, 1732380394408, 1731964539230, 1733080424186, 1733207794491, 1732623330990, 1732378700538, 1731961909146, 1730188161397, 1732084967369, 1731929637577 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_WjM4" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_hG57" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_QAKS" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_hG57" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_QAKS" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_sf39" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Area_Chair_kF15" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_hG57" ], [ "ICLR.cc/2025/Conference/Submission4859/Reviewer_WjM4" ], [ "ICLR.cc/2025/Conference/Submission4859/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a dataset for detecting stance and dogmatism in each turn of a conversation.\\nThe dataset is based on Reddit conversations, where each turn is a reply within a thread.\\nTo label the stance and dogmatism of each speaker\\u2014a task with inherent subjectivity\\u2014the authors use multiple prompting schemes across different models, finalizing each label through majority voting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The dataset provides valuable insights into stance and dogmatic expressions in Reddit conversations, contributing a unique resource for analyzing opinion and belief expression in online discourse.\", \"weaknesses\": \"1. 
Moderate Inter-Annotator Agreement: The inter-annotator agreement between human and LLM annotations could be improved.\\n\\n2. Fragmented Conversations: By selecting only the top two authors\\u2019 comments, the dataset lacks conversational continuity. The two selected authors\\u2019 comments are scattered across the thread, rather than forming a cohesive conversation.\", \"questions\": \"1. The observed inter-annotator agreement between LLM-human and human-human highlights the subjectivity of this task. If these labels are treated as ground truth in experiments, might this reduce the robustness of the results?\\n\\n2. Are there any qualitative examples demonstrating cases with high, moderate, and low inter-annotator agreement? These could provide insight into where the labeling approach is most and least effective.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer WjM4,\\n\\nWe appreciate the reviewer\\u2019s positive feedback and are confident that it has contributed to enhancing the quality of our paper.\\n\\nWe acknowledge the two points raised regarding the use of a weighted kappa metric and the asynchronous nature of the conversations. We will address these points promptly during this discussion period.\\n\\nRegards,\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer sf39,\\n\\nWe appreciate your feedback and effort you have invested in evaluating our work.\\n\\nIn response to your insightful comments, we have addressed the issues you highlighted. We believe these revisions significantly contribute to the clarity and completeness of the paper. 
We kindly request you to verify our response and consider updating your evaluation based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your responses. However, my concern regarding the majority voting with two models remains unresolved.I will maintain the current score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"*We thank the reviewer for their strong positive, insightful and valuable comments and suggestions which are crucial for further strengthening our manuscript.*\", \"**Q1. What I am particularly concerned about is that you only used two LLMs for data annotation, which poses a risk of missing knowledge from other regions and fields, especially knowledge that other models might possess.**\", \"Thank you for this question.\", \"Before proceeding with LLM annotation using larger models, we first tested other versions of GPT, Mistral and LLaMA models, such as GPT-3.5 and Mistral-small and medium, LLaMA-2-70B. However, we found that these models failed to produce annotations in the desired format.\", \"GPT4 is known to perform better than most other models across various regions and fields. Hence, we believe that a combination of GPT4 and mistral-large was good enough given our limited budget.\", \"**Q2. I am also worried that in some cases, the system prompts may not be clear enough, leading to confusion in the large language models during annotation, such as inaccuracies in recognizing the author's stance.**\", \"Thank you for this question.\", \"The detailed system prompt for our LLM annotations is provided in Appendix C in the original paper.\", \"It clearly lists the objective of the task, definition of stance, definition of dogmatism, detailed task description. 
Next we also provide detailed definitions of each of the stance labels as well as each of the document dogmatism labels.\", \"Further we provide a detailed description of the different fields in the input data.\", \"Lastly we have also provided helpful instructions for effective annotation where rather than just requesting the model to assign the label, we also asked it to provide a concise justification.\", \"We therefore think that our system prompt is clear enough. Is there anything specific that you think we could have included?\", \"**Q3. Furthermore, the models face difficulties in identifying intermediate positions and ambiguous attitudes, which can easily lead to misunderstandings of user opinions. These issues will be even more pronounced in machine-annotated data, as you may not be able to provide sufficient information.**\", \"For obtaining annotations, we provided the full conversation. Thus, we are able to provide sufficient information to the models (as much as we could provide to human annotators).\", \"In the recency bias experiments (lines 477 to 485), we observe that providing full context to models is better than providing just the recent context i.e. prior context. Hence, we provided full context to get our final annotations.\", \"Carefully designed system prompt, Few-shot prompting, majority voting are different ways in which we have attempted to obtain good quality annotations from LLMs.\", \"Lastly, GPT4 and mistral have significantly improved abilities to understand and maintain context over long conversations.\", \"**Q4. Please use more experiments to convince me that the weaknesses not exist.**\", \"For weakness 1, we have already argued that GPT4 and mistral can handle most regions and fields. 
Also, we have a limited budget.\", \"For weakness 2, we have already provided a clear system prompt.\", \"For weakness 3, we have already designed robust methods to ensure accurate labels.\", \"We now include qualitative examples demonstrating cases with high, moderate, and low inter-annotator agreement (IAA) for the Stance and Dogmatism tasks, as shown in Appendix Figs. R.1, R.2, R.3, R.4, R.5 and R.6. In cases of high agreement, all LLMs consistently assign the same stance label to a user comment. For moderate agreement, some LLMs assign one stance class while others assign a neighboring stance class. For low agreement, GPT-4 assigns consistent stance labels across its three settings, but Mistral Large outputs differ for each setting. We have added these examples to Appendix R in the revised manuscript to illustrate where the labeling approach is most, moderately, and least effective.\"]}", "{\"comment\": \"We appreciate the reviewer's positive feedback and are confident that it has enhanced the paper's quality.\"}", "{\"comment\": \"Dear Reviewer sf39,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper.\\n\\nSince there are only a few hours remaining for reviewers to post messages to authors, we kindly request you to verify our response and consider updating your evaluation score based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help.\"}", "{\"comment\": \"*We thank the reviewer for their valuable comments and suggestions which are crucial for further strengthening our manuscript.*\\n\\n**Regarding Q2:**\\n\\nThanks for your observation.\\n\\n* Our conclusion in response to Q2, \\\"For Mistral-large, we also observe this when computing inter-annotator agreement (IAA) with humans -- the one-shot model has higher IAA compared to the few-shot one,\\\" was based on the overall test data for both stance
and dogmatism tasks, as presented in Fig. 20. Specifically, for the stance detection task, the IAA is 0.35 for Mistral-large in the one-shot setting and 0.34 in the few-shot setting. Similarly, for the dogmatism detection task, the IAA is 0.54 for the one-shot setting and 0.50 for the few-shot setting.\\n* However, as highlighted by the reviewer, Fig. 13 presents plots based on dividing the data into bins using timestamps. This experiment aimed to analyze the \\\"lost in the middle\\\" phenomenon. We agree with the reviewer that when analyzing the data in these bins, Mistral Large annotations in the few-shot setting achieve higher inter-annotator agreement with human annotations compared to the one-shot setting. This apparent inconsistency can be explained by Simpson's Paradox. While the overall test data indicates that \\u03ba(A,B)>\\u03ba(A,C), when examined within subsets (bins), the trend may reverse, showing \\u03ba(A,B)<\\u03ba(A,C). Simpson's Paradox occurs when trends present within individual groups disappear or reverse when these groups are combined.\\n* We also conducted an analysis using the weighted Cohen's kappa score, as suggested by Reviewer WjM4, which is more appropriate for our setting. This metric accounts for the ordinal nature of the labels by assigning greater importance to closer agreements (e.g., 4 vs. 3) while proportionally penalizing more distant disagreements (e.g., 4 vs. 1). As shown in Fig. 22, the weighted Cohen's kappa scores are 0.59 for Mistral Large in the one-shot setting and 0.53 in the few-shot setting for the dogmatism task.\\n\\n\\n**Could majority voting based on the three GPT-4 settings further enhance the inter-annotator agreement with human annotations?**\\n\\nThank you for this question.
We would like to clarify the process of obtaining labels in our USDC dataset and the role of both GPT-4 and Mistral Large models in the majority voting setup.\\n\\n**Label Distribution Across Models in the Majority Vote:**\\n* For the stance detection task, 85% of the labels (7969 out of 9324) were obtained through majority voting involving both GPT-4 and Mistral models. The remaining 15% (1355 labels) were derived from GPT-4 in cases where there was a conflict.\\n* Similarly, for the dogmatism task, 88% of the labels (1349 out of 1528) were obtained via majority voting, while the remaining 12% (179 labels) were assigned based on GPT-4\\u2019s output in conflict scenarios.\\n\\n**Why Both Models Are Necessary:**\\n* The majority of the labels (more than 85% across tasks) were determined through collaboration between GPT-4 and Mistral, making both models integral to the labeling process.\\n* GPT-4 is only used as a tie-breaker in the minority of cases (15% for stance detection and 12% for dogmatism), ensuring consistency and reliability for ambiguous cases without over-reliance on a single model.\\n\\n**Efficiency and Completeness:**\\nThis hybrid approach allows us to leverage the strengths of both models efficiently. Mistral provides cost-effective annotations for the majority of the data, while GPT-4 ensures high-quality resolution for edge cases, making the labeling process both scalable and accurate.\\n\\nWe hope this explanation clarifies the necessity of both models in our dataset annotation process.\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nWe appreciate your feedback and the effort you have invested in evaluating our work.\\n\\nIn response to your insightful comments, we have addressed the issues you highlighted. We believe these revisions significantly contribute to the clarity and completeness of the paper.
We kindly request you to verify our response and consider updating your evaluation score based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help.\"}", "{\"comment\": [\"*We thank the reviewer for their valuable comments and suggestions which are crucial for further strengthening our manuscript.*\", \"**Q1. Moderate Inter-Annotator Agreement: The inter-annotator agreement between human and LLM annotations could be improved.**\", \"Thank you for this question.\", \"Typical social media based datasets, especially those involving long text, are difficult to label objectively. That said, an inter-annotator agreement of 0.49 for stance and 0.50 for dogmatism is quite reasonable. Previous studies have reported similar agreement values *[Fast & Horvitz 2016] [Sakketou et al. 2022]*.\", \"A possible way to improve IAA is to ensure that the annotation guidelines are as objective as possible. To ensure this, we already performed several iterations (both manually as well as using prompt optimization methods) and have carefully designed the annotation guidelines. We have used the same guidelines as part of the prompt. The prompts are already included in the Appendix.\", \"*[Fast and Horvitz 2016], \\\"Identifying dogmatism in social media: Signals and models.\\\" EMNLP (2016)*\", \"*[Sakketou et al. 2022], \\\"Investigating user radicalization: A novel dataset for identifying fine-grained temporal shifts in opinion.\\\" arXiv preprint arXiv:2204.10190 (2022). https://arxiv.org/pdf/2204.10190*\", \"**Q2. Fragmented Conversations: By selecting only the top two authors\\u2019 comments, the dataset lacks conversational continuity.
The two selected authors\\u2019 comments are scattered across the thread, rather than forming a cohesive conversation.**\", \"Thank you for raising this important question.\", \"We would like to clarify that although we annotated stance and dogmatism labels for the top two authors, we included the full conversation thread with all user comments in our dataset. This ensures that there are no fragmented conversations, as the full context of every user comment is provided.\", \"The dataset captures the entire thread, allowing the LLM to analyze how a user changes their opinion based on other users' comments within the context of the target conversation. This setup enables us to study how a user\\u2019s opinion evolves in response to other users\\u2019 comments throughout the entire conversation.\", \"Our decision to focus on the two most active users stems from the observation that users with fewer comments often do not provide sufficient data to accurately assess their stance or dogmatism. Many users contribute only one or two comments, which is typically insufficient for determining their overall opinion or dogmatic nature.\", \"By prioritizing the two most active users\\u2014who contribute approximately 50% of the comments in each conversation\\u2014we ensure that our analysis captures meaningful opinion fluctuations and provides a robust evaluation of stance and dogmatism. Opinion fluctuations, as well as stance and dogmatism detection can be useful to build moderation tools, which will be most applicable for active users.\", \"While we acknowledge that including additional users could enhance the dataset\\u2019s completeness, we believe our current approach strikes a balance between computational feasibility, analytical depth and practical usability. We appreciate the reviewer\\u2019s insight and will clarify this aspect further in the revised manuscript.\", \"**Q3. 
Are there any qualitative examples demonstrating cases with high, moderate, and low inter-annotator agreement? These could provide insight into where the labeling approach is most and least effective.**\", \"Thank you for this question.\", \"Based on the reviewer\\u2019s suggestion, we now include qualitative examples demonstrating cases with high, moderate, and low inter-annotator agreement (IAA) for the Stance and Dogmatism tasks, as shown in Appendix Figs. R.1, R.2, R.3, R.4, R.5 and R.6.\", \"In cases of high agreement, all LLMs consistently assign the same stance label to a user comment.\", \"For moderate agreement, some LLMs assign one stance class while others assign a neighboring stance class.\", \"For low agreement, GPT-4 assigns consistent stance labels across its three settings, but Mistral Large outputs differ for each setting.\", \"We have added these examples to Appendix R in the revised manuscript to illustrate where the labeling approach is most, moderately, and least effective.\"]}", "{\"summary\": \"It is very costly to have humans annotate Reddit threads with multiple posts to label the stances and dogmatism of multiple users.\\nPrevious work considers posts independently; however, that is not the nature of the interaction. \\n\\nThis paper looks at whether LLMs can encapsulate the nuances to understand user opinions and whether their opinions shift through the conversation. \\n\\nLLMs are used to classify (1) user stances and (2) user dogmatism and express their reasoning for their classification. Every sample is annotated six times (two models x zero-shot, one-shot, and few-shot) and then the majority vote is taken. \\n\\nThe paper introduces the USDC dataset that includes 1528 dogmatism samples (user-level) and 9618 stance samples. It is able to capture contextual and opinion shifts. As such it can be used as an instruction-tuning dataset or evaluation benchmark.
The authors also instruction-tune and fine-tune LLMs on the dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-motivated and easy to read.\\n2. The approach is straightforward and makes sense. \\n3. The added qualitative analysis is very important and nice to read.\", \"weaknesses\": \"1. The majority voting conflict makes me wonder why Mistral is used at all if, in cases of conflict, the decision maker is GPT4 (which is quite a costly model)?\\n2. Majority voting labels are used as ground-truth. It would be good to add experiments on what would happen if we train on unaggregated labels, as subjectivity is important in such a task.\", \"questions\": \"Why is it the case that instruction-tuning is better for stance and fine-tuning better for dogmatism?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nWe appreciate your thoughtful feedback and acknowledge your concerns regarding the use of majority voting with two models.\\n\\n**How are conflicts resolved**\\n* When generating annotations using both GPT-4 and Mistral, it\\u2019s possible that the two models might provide different annotations for the same conversation. To ensure consistency and accuracy in the final dataset, we have established a clear process for resolving these conflicts:\\n\\n**Majority Voting:**\\n* What It Is: Majority voting is a method where, if multiple models or iterations are used, we look at all the annotations provided and choose the label that appears most frequently.\\n* How It Helps: This approach helps reduce the impact of any potential error or bias from a single model. 
By relying on the most common label across models, we increase the likelihood that the chosen annotation is accurate.\\n\\n**Handling Situations with No Clear Majority:**\\n* The Challenge: Sometimes, even with majority voting, the two models might provide different annotations, and neither label clearly dominates.\\n* Our Solution: In these cases, we use the annotation provided by GPT-4 labels as the deciding factor or \\\"gold standard.\\\"\\n* Why GPT-4 labels?: We chose to prioritize GPT-4 annotations because human annotations have better IAA agreement with GPT-4 labels. We have discussed label distribution across models in the majority vote in the previous responses.\\n\\nBy following these steps, we aim to resolve conflicts in a way that enhances the reliability and accuracy of our dataset. We understand the importance of addressing potential conflicts thoroughly and believe that this method provides a balanced and effective solution.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your responses to my earlier comments. I appreciate the detailed clarifications and revisions provided. Below are my follow-up comments:\\n\\nRegarding Question 2, based on the results in Figure 13 of the paper, it appears that Mistral Large annotations in the few-shot setting achieve higher inter-annotator agreement with human annotations compared to the one-shot setting. This observation seems inconsistent with the second point mentioned in your response. \\n\\nFurthermore, Figure 13 also shows that GPT-4 consistently achieves higher inter-annotator agreement with human annotations across zero-shot, one-shot, and few-shot settings than Mistral Large. In light of this, could majority voting based on the three GPT-4 settings further enhance the inter-annotator agreement with human annotations? 
Additionally, would models trained on the majority voting annotations derived from these GPT-4 settings achieve further enhanced performance on the test set (as shown in Table 1)?\\n\\nIf I have misunderstood any aspect of the results or analyses, I welcome your clarification and additional insights. Thank you again for your response!\"}", "{\"comment\": \"*We thank the reviewer for their strong positive, insightful and valuable comments and suggestions which are crucial for further strengthening our manuscript.*\\n\\n**Q1. I recommend that the authors revise these statements to reflect the results more precisely.**\\n\\nThank you for pointing this out. We have now revised the paragraph to better reflect the results as follows. \\n\\n* Correction of Majority Voting Performance for Dogmatism Task under SLM fine-tuning:\\n - Upon this clarification, for the Dogmatism Classification task, when using majority voting labels as ground truth, 2 out of 7 models in the fine-tuning setup achieve an F1-score above 50%. This distinction is now clearly reflected in the revised manuscript. \\n* Correction for line 407: \\u201cFor both tasks when finetuning, the majority voting labels as ground truth has a relatively high performance, scoring above 50\\\\% weighted F1-score across several (7/7 for stance and 2/7 for dogmatism) models.\\u201d\\n\\n**Q2. Why do SLMs fine-tuned with one-shot annotations outperform those fine-tuned with corresponding few-shot annotations? Could the authors provide a hypothesis that might explain this counterintuitive result?**\\n\\nThank you for this question.\\n\\n* When fine-tuning with few-shot annotations, the model might overfit to the small number of examples provided. This overfitting can lead to poorer generalization to new data, whereas one-shot annotations might strike a better balance between learning and generalization.
\\n* For Mistral-large, we also observe this when computing inter-annotator agreement (IAA) with humans -- the one-shot model has higher IAA compared to the few-shot one. \\n\\nWe believe that the large size of GPT4 helps it to avoid this problem.\\n\\n* Previous studies [Chen et al. 2023] have also shown that one-shot could be as good as or sometimes better compared to few-shot. \\n\\n[Chen et al. 2023]. \\\"How Many Demonstrations Do You Need for In-context Learning?\\\" EMNLP Findings 2023.\"}", "{\"title\": \"Looking forward to receiving your feedback\", \"comment\": \"Dear Reviewer hG57,\\n\\nWe truly understand the large workload that comes with reviewing and deeply appreciate the effort and time you have dedicated to reviewing our paper. As we are at the last stage of updating the PDF, we want to kindly follow up to ensure there is sufficient time to address any remaining concerns you might have. Your recognition is highly important to us, and we sincerely hope to address all your concerns.\\n\\nThank you once again for your efforts in reviewing our paper, and we look forward to receiving your feedback.\\n\\n\\nRegards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for addressing my concerns. As I was already positive about this paper, I would like to keep my score the same!\"}", "{\"summary\": \"In this paper, the authors introduce a new dataset, named USDC. It is a dataset focusing on user stance and dogmatism in long conversations. Different from the previous methods based on labeling individual posts, this dataset can track changes in user opinions throughout the entire conversation, making it particularly useful for user personalization, market research, political campaigns, and content moderation.
However, this paper involves only two LLMs, Mistral Large and GPT-4, which risks restricting the dataset to the knowledge contained in those two models.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The article is well-written and easy for readers to understand.\\n\\n2. It contributes a novel dataset, which is a unique collection focusing on user stance and dogmatism in long conversations. \\n\\n3. Extensive experiments demonstrate that the annotations generated by LLMs are comparable to those generated by humans. \\n\\n4. Fine-tunes and instruction-tunes multiple small language models, demonstrating their effectiveness.\", \"weaknesses\": \"1. What I am particularly concerned about is that you only used two LLMs for data annotation, which poses a risk of missing knowledge from other regions and fields, especially knowledge that other models might possess.\\n2. I am also worried that in some cases, the system prompts may not be clear enough, leading to confusion in the large language models during annotation, such as inaccuracies in recognizing the author's stance. \\n3. Furthermore, the models face difficulties in identifying intermediate positions and ambiguous attitudes, which can easily lead to misunderstandings of user opinions. These issues will be even more pronounced in machine-annotated data, as you may not be able to provide sufficient information.\", \"questions\": \"Please use more experiments to convince me that the weaknesses do not exist.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nThank you for your feedback and for raising important ethical considerations regarding our paper.
We would like to address them in detail.\\n\\n**Ethical Considerations:**\\n\\n* While Reddit posts and comments are publicly accessible (i.e., **the data is curated from public Reddit conversation threads**), and Reddit usernames are not real names, we want to clarify that we are not handling any personal demographic details of the users. **We only consider post IDs for mapping with users, ensuring that no user identity information is revealed in our research.** This approach helps to maintain user privacy while still allowing for meaningful analysis of the data. \\n\\nWe hope this clarifies our approach and addresses the ethical concerns raised. We kindly request you to verify our previous and current responses and consider updating your evaluation score based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nRegards,\\n\\nAuthors\"}", "{\"metareview\": \"This work investigates Stance and Dogmatism in conversations, by using LLMs to automate the human annotation process on user stance classification and user dogmatism classification over 764 multi-user Reddit conversations. As a result, this work constructs the USDC dataset. Specifically, 200 test conversations are annotated by human.\", \"strengths\": \"1. The paper is well-written, clear, and well-motivated.\\n2. The paper constructs a new dataset for studying Stance and Dogmatism in conversations.\\n3. Experimental results validate the effectiveness of the proposed approach.\", \"weaknesses\": \"1. The reliability of the constructed dataset is questioned by all the reviewers, including the subjectivity (moderate inter-annotator agreement/majority vote as ground truth), unnatural conversations (fragmented discussion thread), only two LLMs for annotations, etc.\\n2. The experimental results lack in-depth analysis. 
Some were resolved during the rebuttal, but others still do not satisfy the reviewers. \\n3. There are only 200 human-annotated test conversations, which may not be sufficient for robust evaluation. \\n\\nOverall, as a resource paper, the quality and reliability of the contributed dataset should be the key. However, according to the comments from the reviewers, despite the good quality of the writing and motivation of the problem, there are still many concerns about the curated dataset itself. Therefore, it would be better to further enhance the quality validation and even the size and diversity of the created dataset.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer QAKS remains positive towards this work, but the concerns from Reviewer WjM4 and hG57 are not fully addressed.\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nWe appreciate your feedback and the effort you have invested in evaluating our work.\\n\\nIn response to your insightful comments, we have addressed the issues you highlighted. We believe these revisions significantly contribute to the clarity and completeness of the paper. We kindly request you to verify our response and consider updating your evaluation based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help\"}", "{\"title\": \"Summary of our responses and revision\", \"comment\": \"*We are grateful to all reviewers for their strong positive feedback, time and their constructive suggestions, which will further strengthen the impact of our work.*\\n\\n**Summary of Reviewer Strengths:**\\n\\n1. The paper is well-written, clear, and effectively motivated **(QAKS, sf39)**.\\n2. This paper introduces a novel contribution through the USDC dataset, which focuses on user stance and dogmatism in multi-user conversations **(WjM4, QAKS, sf39, hG57)**.\\n3.
The methodology is straightforward and includes qualitative analysis, which adds depth to the findings **(QAKS, sf39)**\", \"4. Extensive experiments, including fine-tuning and instruction-tuning of multiple small language models, validate the effectiveness of the proposed approach **(QAKS, sf39, hG57)**.\", \"5. The work leverages large language models (LLMs) for efficient and scalable annotation, demonstrating that LLM-generated annotations are comparable to human annotations **(hG57)**.\", \"**Additional changes to the draft during the rebuttal process**\", \"We have updated the main manuscript and the appendix to address the following comments. The changes made in the manuscript are highlighted in blue. The major additional changes are listed below.\", \"1. **Qualitative examples demonstrating cases with high, moderate, and low inter-annotator agreement** (Reviewer WjM4, sf39): We have included qualitative examples in Appendix R of the revised manuscript, demonstrating cases with high, moderate, and low inter-annotator agreement (IAA) for the Stance and Dogmatism tasks, as illustrated in Figs. R.1\\u2013R.6. These examples highlight where the labeling approach is most effective, moderately effective, and least effective, showcasing consistent stance labels in high-agreement cases and discrepancies, particularly with Mistral Large, in low-agreement cases.\", \"2. **Robustness analysis of Human-LLM Annotations** (Reviewer WjM4): We have included a heatmap in Appendix Q, Fig. 21, comparing human-annotated labels with majority voting labels from LLMs, highlighting class-specific agreement for the Stance and Dogmatism tasks. The significant mismatch in intermediate stance classes, particularly \\\"Somewhat Against\\\" in stance detection and \\\"Open to Dialogue\\\" in dogmatism, likely accounts for the moderate inter-annotator agreement (IAA) observed between human and LLM-generated labels.\", \"3.
**Weighted Cohen's Kappa score: IAA between human labels and LLM-generated labels** (Reviewer WjM4): Appendix S, Fig 22 presents weighted Cohen\\u2019s Kappa across eight settings, and highlights that the weighted Cohen\\u2019s Kappa metric improves the IAA between human annotations and the majority voting approach to 0.55.\\n\\nWe hope these revisions will satisfactorily address the concerns raised by the reviewers and elevate the overall quality of our work.\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nAs the author-reviewer discussion phase approaches its conclusion, we kindly request that you review our response and consider updating your evaluation score based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nRegards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer hG57,\\n\\nAs the author-reviewer discussion phase is set to conclude in the next 5 hours, we kindly invite you to review our response and consider revisiting your evaluation score in light of the additional information provided.\\n\\nShould you have any further questions or suggestions, we would be more than happy to offer any clarification or additional details to address your concerns.\\n\\nThank you for your time and thoughtful consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer sf39,\\n\\nThank you for your valuable and constructive comments, which have significantly enhanced the quality of our manuscript.\\n\\nWe have carefully addressed the comments and questions you raised, making significant revisions to enhance the clarity and completeness of the paper. 
We kindly request you to verify our response and consider updating your evaluation score based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help.\"}", "{\"comment\": \"Dear Reviewer WjM4,\\n\\nWe appreciate the reviewer\\u2019s positive feedback and are confident that it has contributed to enhancing the quality of our paper.\\n\\n* As per your suggestion, we used the weighted Cohen's Kappa metric to compute the inter-annotator agreement (IAA) between human labels and LLM-generated labels across six settings, as well as majority voting, for the dogmatism task. Appendix S, Figure 22 reports the IAA on the test dataset, presenting the weighted Cohen\\u2019s Kappa score across eight settings: two different models (2 models \\u00d7 3 settings), majority voting, and human annotations for the dogmatism task.\\n* This figure highlights that the weighted Cohen\\u2019s Kappa metric improves the IAA between human annotations and the majority voting approach to 0.55, compared to the earlier score of 0.5 using the standard Cohen\\u2019s Kappa metric. This indicates that the weighted Cohen\\u2019s Kappa score effectively penalizes more distant disagreements, potentially leading to an improved measure of partial agreement.\\n\\n**Asynchronous Nature of User Conversations:**\\n\\n* We agree with the reviewer that Reddit conversations can indeed occur in real-time, but they predominantly unfold asynchronously. While users can respond immediately, delays between comments are common as users engage at different times. To address this, **our dataset includes the entire conversation thread with messages from all users**, arranged in the order of posts by timestamp **(not just the utterances of the 2 users)**. 
This ensures that all comments, regardless of posting time, are considered, preserving the context and continuity necessary for accurately evaluating stance and dogmatism. Our methods are designed to account for temporal gaps and contextual shifts, enabling robust analysis despite the asynchronous nature of these interactions.\\n* Also, while synchronous mechanisms like Facebook chat or Team chat could also be useful in evaluating stance and dogmatism, we believe that asynchronous communication is typically more thought-through because the users take time to think, form an opinion, think about consequences and then respond. Hence, evaluating user stance or dogmatism from asynchronous communication threads (while retaining entire conversation context) may be more useful than evaluating using synchronous real-time chat messengers.\\n\\nWe hope these updates address your concerns and demonstrate our commitment to improving the manuscript. Thank you again for your insightful feedback.\"}", "{\"comment\": [\"*We thank the reviewer for their strong positive, insightful and valuable comments and suggestions which are crucial for further strengthening our manuscript.*\", \"**Q1. The majority voting conflict makes me wonder why Mistral is used at all if, in cases of conflict, the decision maker is GPT4 (which is quite a costly model)?**\", \"Thank you for this question. We would like to clarify the process of obtaining labels in our USDC dataset and the role of both GPT-4 and Mistral Large models in the majority voting setup.\", \"Label Distribution Across Models:\", \"For the stance detection task, 85% of the labels (7969 out of 9324) were obtained through majority voting involving both GPT-4 and Mistral models. 
The remaining 15% (1355 labels) were derived from GPT-4 in cases where there was a conflict.\", \"Similarly, for the dogmatism task, 88% of the labels (1349 out of 1528) were obtained via majority voting, while the remaining 12% (179 labels) were assigned based on GPT-4\\u2019s output in conflict scenarios.\", \"Why Both Models Are Necessary:\", \"The majority of the labels (more than 85% across tasks) were determined through collaboration between GPT-4 and Mistral, making both models integral to the labeling process.\", \"GPT-4 is only used as a tie-breaker in the minority of cases (15% for stance detection and 12% for dogmatism), ensuring consistency and reliability for ambiguous cases without over-reliance on a single model.\", \"Efficiency and Completeness:\", \"This hybrid approach allows us to leverage the strengths of both models efficiently. Mistral provides cost-effective annotations for the majority of the data, while GPT-4 ensures high-quality resolution for edge cases, making the labeling process both scalable and accurate.\", \"We hope this explanation clarifies the necessity of both models in our dataset annotation process.\", \"**Q2. Majority voting labels are used as ground-truth. It would be good to add experiments on what would happen if we train on unaggregated labels, as subjectivity is important in such a task.**\", \"Thank you for this question.\", \"We would like to clarify that our experiments are not limited to majority voting as ground truth. We have also performed fine-tuning and instruction tuning using zero-shot, one-shot, and few-shot labels from both GPT-4 and Mistral Large models as ground truth. This additional analysis provides a comprehensive evaluation of how different types of labels, including individual model labels, affect model performance.\", \"Analysis of Majority Voting vs. 
Individual Model Labels:\", \"The results (Table 1 in main paper) demonstrate that majority voting labels consistently result in better model performance compared to labels obtained from individual models (e.g., GPT-4 or Mistral alone).\", \"Majority voting helps mitigate noise and outlier annotations by capturing a more robust consensus, improving generalization and reducing variability during training.\", \"Unaggregated Labels Under Individual Model Performance:\", \"The analysis of unaggregated labels is equivalent to conducting experiments with individual model labels. The performance using GPT-4 or Mistral labels alone reflects the impact of unaggregated, model-specific subjectivity.\", \"These experiments highlight that while individual models capture useful task-specific features, aggregated majority voting labels lead to better alignment with the task\\u2019s ground truth.\", \"In summary, our study already incorporates an analysis of unaggregated labels via experiments on individual model labels.\", \"**Q3. Why is it the case that instruction-tuning is better for stance and fine-tuning better for dogmatism?**\", \"Thank you for this thoughtful question. Below, we clarify why fine-tuning appears to outperform instruction-tuning for the dogmatism detection task and provide additional analysis to support our findings.\", \"Since dogmatism detection is inherently more complex and varied than stance detection, the model might struggle to generalize from the instructional data\", \"We report the confusion matrix for dogmatism detection task in Fig. 
9 in the Appendix.\", \"It shows significant misclassifications, especially for the ''Deeply Rooted'' and ''Flexible'' labels, with zero accuracy and F1-scores.\", \"On the other hand, the model performs moderately better for ''Firm but Open'' and ''Open to Dialogue'' classes with accuracies of 48.7\\\\% and 64.4\\\\%, respectively.\", \"The confusion matrix also indicates substantial confusion to distinguish between intermediate levels of dogmatism, such as ''Firm but Open'' and ''Open to Dialogue''.\", \"This analysis demonstrates that dogmatism detection demands robust handling of long-term dependencies and task-specific adaptations, which fine-tuning achieves more effectively than instruction-tuning.\"]}", "{\"summary\": \"Previous datasets on stance and dogmatism in user conversations are constructed at the post level. However, these datasets fail to capture fluctuations in users' opinions throughout entire conversational contexts. This paper introduces USDC, a dataset focused on user stance and dogmatism in long conversations. Inspired by the recent success of large language models (LLMs) in complex natural language processing tasks, the authors leverage Mistral Large and GPT-4 to annotate the training data for USDC. To ensure data quality, they manually labeled 200 test conversations for evaluation. They also conducted experiments to assess the performance of different models on the test set.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a novel dataset, USDC, which focuses on user stance and dogmatism in multi-user conversations.\\n2. It leverages large language models (LLMs) for efficient, scalable annotation of complex conversational data.\\n3. Extensive experiments are conducted using a variety of small language models.\", \"weaknesses\": \"In the experimental section of the paper, the authors often merely list the results without providing in-depth analysis of the underlying reasons. 
For more details, refer to the \\\"Questions\\\".\", \"questions\": \"Q1: Line 407 \\\"2) For both tasks, the majority voting labels as ground truth has a relatively high performance, scoring above 50% weighted F1-score across several models.\\\" This expression is not very accurate. As shown in Table 1, for the Dogmatism Classification task, only 2 out of 14 models achieved an F1-score above 50% when using majority voting labels as ground truth. In contrast, when GPT-4 FS labels were used as ground truth, 11 out of 14 models outperformed the corresponding F1-scores obtained with majority voting labels. To enhance clarity and accuracy, I recommend that the authors revise these statements to reflect the results more precisely.\", \"q2\": \"Line 410 \\\"4) For GPT-4 annotations, in most cases, SLMs finetuned with few-shot annotations outperform those trained with zero and one-shot annotations. For Mistral Large annotations, SLMs finetuned with one-shot annotations perform the best.\\\" For the Mistral Large annotations, why do SLMs fine-tuned with one-shot annotations outperform those fine-tuned with corresponding few-shot annotations? Could the authors provide a hypothesis that might explain this counterintuitive result?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Author\", \"comment\": \"Dear Author,\\n\\nThank you for your clarification and for including inter-annotator annotation examples in the Appendix.\\nThis addition has helped me better understand the dataset.\\n\\nIt seems that the labels are on an ordinal scale (e.g., \\\"Deeply Rooted\\\" = 4, \\\"Firm but Open\\\" = 3, etc.). Given this, it would make more sense to evaluate inter-annotator agreement using a weighted kappa metric, which accounts for the ordinal nature of the labels. (i.e., assigns greater importance to closer agreements (e.g., 4 vs. 
3) and penalizes more distant disagreements (e.g., 4 vs. 1) proportionally.)\\n\\nI still believe that the asynchronous nature of the conversation between the two parties may not be sufficient for properly evaluating the user's stance and dogmatism within a cohesive conversational flow. This limitation could impact the accuracy and consistency of the evaluation. It would be helpful if you could clarify this aspect in your paper, perhaps by addressing how the asynchronous setup accounts for or mitigates this issue. Providing a discussion or justification for this design choice would strengthen the validity of the approach and make it clearer to readers.\"}", "{\"comment\": [\"**Q4. The observed inter-annotator agreement between LLM-human and human-human highlights the subjectivity of this task. If these labels are treated as ground truth in experiments, might this reduce the robustness of the results?**\", \"Typical social media based datasets, especially those involving long text, are difficult to label objectively. That said, an inter-annotator agreement of 0.49 for stance and 0.50 for dogmatism is quite reasonable. Previous studies have reported similar agreement values *[Fast & Horvitz 2016] [Sakketou et al. 2022]*.\", \"One way to improve IAA is to ensure that the annotation guidelines are as objective as possible. To ensure this, we already performed several iterations (both manually as well as using prompt optimization methods) and have carefully designed the annotation guidelines. We have used the same guidelines as part of the prompt. The prompts are already included in the Appendix.\", \"Using this dataset, we train llama-based models. These models are known to be robust to training label noise.\", \"The IAA is around 0.5, but it only captures mismatches and not the degree of mismatch. In our case, the classes have a ranking order. The predictions from human vs. LLM are close to each other in this order even when they mismatch.\", \"Appendix Q Fig.
21 presents a heatmap comparing human-annotated labels and majority voting labels from LLMs, illustrating the class-specific agreement for Stance and Dogmatism tasks. From Fig. 21 (left), we make the following observations:\", \"(i) The ''Stance Not Inferrable'' (SNI) and ''Strongly Against'' (SGA) classes exhibit high agreement between human annotations and LLM predictions, as indicated by the strong diagonal values for these categories. These classes achieve recall values of 0.97 and 0.65, respectively, when compared to human labels.\", \"(ii) ''Somewhat in Favor'' (SIF) and ''Somewhat Against'' (SOA) show substantial mismatches with human labels, leading to higher rates of false positives in LLM predictions.\", \"(iii) Notably, ''Somewhat Against'' (SOA) demonstrates the greatest level of disagreement (recall of 0.37), with frequent misclassification into neighboring categories such as ''Strongly Against'' (SGA) or ''Somewhat in Favor'' (SIF).\", \"For Dogmatism task, we make following observations from Fig. 21 (right):\", \"(i) The ''Firm but Open'' (FBO) and ''Open to Dialogue'' (OTD) classes exhibit relatively high agreement, with strong diagonal values in the confusion matrix. These classes show better alignment between human labels and LLM predictions compared to other dogmatism categories.\", \"(ii) The ''Deeply Rooted'' (DR) and ''Flexible'' (FX) classes have significantly fewer samples and exhibit frequent misclassifications. For instance, ''Deeply Rooted\\u2019\\u2019 (DR) is often misclassified as ''Firm but Open\\u2019\\u2019 (FBO), indicating challenges in detecting extreme levels of dogmatism.\", \"Overall, the significant mismatch for intermediate stance classes, particularly ''Somewhat Against\\u2019\\u2019 in the stance detection task and ''Open to Dialogue\\u2019\\u2019 in the dogmatism task, likely explains the moderate inter-annotator agreement (IAA) observed between human and LLM-generated labels.\"]}" ] }
EaiU4F5pwn
Physics-Informed Self-Guided Diffusion Model for High-Fidelity Simulations
[ "Ruoyan Li", "Zijie Huang", "Yizhou Sun", "Wei Wang" ]
Machine learning (ML) models are increasingly explored in fluid dynamics as a promising way to generate high-fidelity computational fluid dynamics data more efficiently. A common strategy is to use low-fidelity data as computationally efficient inputs, and employ ML techniques to reconstruct high-fidelity flow fields. However, existing work typically assumes that low-fidelity data is artificially downsampled from high-fidelity sources, which limits model performance. In real-world applications, low-fidelity data is generated directly by numerical solvers with a lower initial state resolution, resulting in large deviations from high-fidelity data. To address this gap, we propose PG-Diff, a novel diffusion model for reconstructing high-fidelity flow fields, where both low- and high-fidelity data are generated from numerical solvers. Our experiments reveal that state-of-the-art models struggle to recover fine-grained high-fidelity details when using solver-generated low-fidelity inputs, due to distribution shift. To overcome this challenge, we introduce an \textit{Importance Weight} strategy during training as self-guidance and a training-free \textit{Residual Correction} method during inference as physical inductive bias, guiding the diffusion model toward higher-quality reconstructions. Experiments on four 2D turbulent flow datasets demonstrate the effectiveness of our proposed method.
[ "Physics-informed Neural Networks", "Computational Fluid Dynamics" ]
Reject
https://openreview.net/pdf?id=EaiU4F5pwn
https://openreview.net/forum?id=EaiU4F5pwn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vH5hNKtGeR", "uLNoIi0jR2", "sdQrSEZ6MH", "qa9m2yz6L3", "muGlQZDDNq", "m0mYRsVUfa", "jar15Yx3QZ", "dQ7u6OlNUM", "bK3WiiH2x2", "ZzeJKErvls", "ZGZFu70VS5", "X9RLjXlcwJ", "WBsQpNfbsY", "W0XMubHmvE", "VtjW20lcOM", "RejAy43ltn", "PmbhTbi1JH", "OHJUkwnfLb", "O1XLqpgeTK", "NbnOCTR9Ot", "NYWaK1Ds82", "KUZlEZaxRE", "IxFqUMfE6a", "Inc5Vvf4dZ", "HULvycLLsf", "HDO4sMmcz5", "D1KBSftUwc", "CZcrFbIL0I", "CHAyOmQYyP", "BMLj5WnrV9", "9FAmv9KjuD", "8p1ekmWaun", "8fD0ZpWnug", "46V9Rlh92D", "3wDQurUL9f", "24Yx3SfoSh", "15NYifVr12" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732081002259, 1732932874851, 1732080058000, 1732078948667, 1732679687910, 1730643537288, 1732499517485, 1732675882068, 1732499304771, 1732637746708, 1732932814381, 1732079942562, 1732499367558, 1730436075104, 1732508341824, 1732078661696, 1730658581302, 1732679783503, 1732080408235, 1734498651354, 1732080966645, 1732674584023, 1733181507873, 1732079529768, 1732679464494, 1730417952678, 1732675976515, 1732499648274, 1733168400910, 1732077793627, 1731635515634, 1732612207084, 1737523629131, 1732499583955, 1731204457714, 1732499443232, 1730703483643 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_ovnW" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_ZSxU" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_ZSxU" ], [ "ICLR.cc/2025/Conference/Submission4262/Area_Chair_8FEW" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_2FBP" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Area_Chair_8FEW" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_2FBP" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_Vjib" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_qB2f" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_qB2f" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Area_Chair_8FEW" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_ovnW" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_EzG5" ], [ "ICLR.cc/2025/Conference/Submission4262/Authors" ], [ "ICLR.cc/2025/Conference/Submission4262/Reviewer_Vjib" ] ], "structured_content_str": [ 
"{\"title\": \"Response to Reviewer EzG5 Continued\", \"comment\": \"### **[Bicubic Interpolation Runtime]**\\nThe original bicubic interpolation was tested on CPUs. We have revised Table 8 to report the runtime of PyTorch-implemented bicubic interpolation running on GPUs. We would like to comment that since the predictive accuracy, physical consistency, and perceptual quality of bicubic interpolation reconstructed samples are significantly worse than PG-Diff and other baselines, our results that PG-Diff is better than the baselines still hold true.\\n\\n[1] Redefining Super-Resolution: Fine-mesh PDE predictions without classical simulations.\\\\\\n[2] Inexpensive high fidelity melt pool models in additive manufacturing using generative deep diffusion.\\\\\\n[3] A physics-informed diffusion model for high-fidelity flow field reconstruction.\"}", "{\"title\": \"Regarding Feedback from Reviewer\", \"comment\": \"Dear Reviewer Vjib,\\n\\nI hope this message finds you well. We recently submitted our rebuttal and would like to kindly request your feedback on our responses.\\n\\nWe understand that your schedule is demanding and greatly appreciate the time and effort you dedicate to the review process. Your insights are invaluable to us, and we are eager to address any further questions or concerns you may have.\\n\\nThank you for your attention to this matter. We look forward to your response.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ZSxU Continued\", \"comment\": \"### **[Qualitatively Different Corrections]**\\nSince PG-Diff only requires high-fidelity data during training and does not learn specific low-fidelity and high-fidelity relationships, it is unable to correct frames that are qualitatively incorrect as a postprocessing method. We ensured that the low-fidelity data in current experiments are qualitatively correct. However, we show that when integrated within the solver, PG-Diff can address this challenge.
We present the results in the Appendix of our modified paper.\\n\\nThe method follows \\u201csolver->super resolution->downsample->solver\\u201d. We first use a numerical solver for one-step predictions at a coarse grid. Then, we apply PG-Diff for super resolution, and then downsample it to a coarse grid, which is used as input for the next simulation step.\\n\\nWe would like to highlight that even though at some point in the numerical simulation, PG-Diff will not be able to recover the trajectory, a core benefit of PG-Diff and [1] is that it only requires high-fidelity data during training. When presented with a new type of low-fidelity data, for example 40x40, PG-Diff only needs a small validation dataset to determine the optimal t_guide. Then the pre-trained model can be directly applied to reconstruct from the low-fidelity data.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction.\"}", "{\"title\": \"Response to Reviewer 2FBP Continued\", \"comment\": \"### **[Qualitatively Different Corrections]**\\nSince PG-Diff only requires high-fidelity data during training and does not learn specific low-fidelity and high-fidelity relationships, it is unable to correct frames that are qualitatively incorrect as a postprocessing method. We ensured that the low-fidelity data in current experiments are qualitatively correct. However, we show that when integrated within the solver, PG-Diff can address this challenge. We present the results in the Appendix of our modified paper.\\n\\nThe method follows \\u201csolver->super resolution->downsample->solver\\u201d. We first use a numerical solver for one-step predictions at a coarse grid.
Then, we apply PG-Diff for super resolution, and then downsample it to a coarse grid, which is used as input for the next simulation step.\\n\\nWe would like to highlight that even though at some point in the numerical simulation, PG-Diff will not be able to recover the trajectory, a core benefit of PG-Diff and [1] is that it only requires high-fidelity data during training. When presented with a new type of low-fidelity data, for example 40x40, PG-Diff only needs a small validation dataset to determine the optimal t_guide. Then the pre-trained model can be directly applied to reconstruct from the low-fidelity data. \\n\\n### **[t_guide in inference]**\\nWe adopt the same t_guide for 4x upsampling and 8x upsampling as [1]. We clarified hyperparameters in Appendix C2 IMPLEMENTATION DETAILS. We also conducted informal experiments to verify that the t_guide suggested in [1] still works very well in our case.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction. \\\\\\n[2] Machine learning accelerated computational fluid dynamics.\"}", "{\"comment\": \"Thank you for taking the time to review our rebuttal and for raising the score. We greatly appreciate your thoughtful feedback and support.\"}", "{\"summary\": \"This manuscript aims to solve the problem of super-resolution of physical fields. Specifically, they draw on the framework in [1]. The difference lies in 1) weighting the denoising loss over pixels using the wavelet transform and 2) performing physical loss gradient descent over clean predictions.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Authors present four turbulence datasets with different characteristics.\\n\\n2. I agree with one of the author's points in the introductory section. They note that some of the current work focuses on low-resolution data that is downsampled.
This is not consistent with reality, and in fact, to apply these super-resolution models, such data should come from the low-resolution simulation of a PDE solver, which typically has lower fidelity than downsampled ones.\\n\\n3. The amount of experiments is rich enough, and all the graphs are clear.\", \"weaknesses\": \"1. I have some questions about the authors' contributions to modeling. First is the training phase. The authors use the wavelet transform to determine which pixels need to be weighted. As can be seen from the visualization in Figure 10, the weighting seems to be added only at high frequencies. This raises the question of whether a complex wavelet transform needs to be introduced. This is because we can simply call the Laplace operator to achieve this effect. Furthermore, from the quantitative results in Table 1, it seems that this measure does not contribute much to the results.\\n\\n2. In addition, another contribution of the authors is to perform gradient descent on clean samples at each step to reduce the residuals on the equations. The authors ignore much related literature on guided generation with diffusion models [1,2,3], which they do not cite or discuss here.\\n\\n3. The authors claim that their high-fidelity data is actually simulated on a grid of 256. It is important to realize that for such a large Reynolds number (1000+), this resolution is usually insufficient. Shu et al.
[4]'s data was simulated on a grid of 2048.\\n\\n[1] DIFFUSION POSTERIOR SAMPLING FOR GENERAL NOISY INVERSE PROBLEMS\\n\\n[2] Denoising Diffusion Models for Plug-and-Play Image Restoration\\n\\n[3] DiffusionPDE: Generative PDE-Solving Under Partial Observation\\n\\n[4] A physics-informed diffusion model for high-fidelity flow field reconstruction\", \"questions\": \"see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer ovnW,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Request for Follow-Up Feedback on Author Rebuttal\", \"comment\": \"Dear Reviewer EzG5,\\n\\nOur detailed rebuttal has been submitted, and we have thoroughly addressed all the points and suggestions you raised. We understand the significant workload involved in reviewing papers, but we kindly request your feedback on our responses to ensure that the discussions are as productive and comprehensive as possible. \\n\\n\\nAs the discussion phase is coming to an end, we have so far received feedback from many others, and would like to get your insights in refining the final version of our work. We believe there were some misunderstandings, which we have clarified in the above rebuttal. We sincerely hope you can provide us feedback.\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer EzG5,\\n\\nWe greatly appreciate your time and feedback on our work.
We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Reply to authors\", \"comment\": \"I would like to thank the authors for their efforts and clarification. Some of my concerns such as the quality of the data have been addressed. I adjusted my score from 5 to 6.\\n\\nHowever, I think some weaknesses remain. First, conditional diffusion + learning is a quite common technique for diffusion inverse problem application. Applying it to PDE problems with residual constraint is definitely an interesting direction though. Second, putting the solver in the loop is a good start, however there is not too much quantitative evidence in the paper showing that the model do correct the deviation of the trajectory caused by under-resolved mesh unlike previous works [1, 2]. Currently, the results presented qualitatively shows that diffusion model can make the results smoother and seemingly more coherent. \\n\\n[1] Um, Kiwon, et al. \\\"Solver-in-the-loop: Learning from differentiable physics to interact with iterative pde-solvers.\\\" Advances in Neural Information Processing Systems 33 (2020): 6111-6122.\\n\\n[2] Kochkov, Dmitrii, et al. \\\"Machine learning\\u2013accelerated computational fluid dynamics.\\\" Proceedings of the National Academy of Sciences 118.21 (2021): e2101784118.\"}", "{\"title\": \"Regarding Feedback from Reviewer\", \"comment\": \"Dear Reviewer EzG5,\\n\\nI hope this message finds you well. 
We recently submitted our rebuttal and would like to kindly request your feedback on our responses.\\n\\nWe understand that your schedule is demanding and greatly appreciate the time and effort you dedicate to the review process. Your insights are invaluable to us, and we are eager to address any further questions or concerns you may have.\\n\\nThank you for your attention to this matter. We look forward to your response.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ZSxU\", \"comment\": \"We would like to thank the reviewer for the feedback. In what follows, we hope to address any concerns you might have.\\n\\n### **[Diffusion Model Guided Generation Literature]**\\nThank you for pointing out other related work in diffusion model guided generation. We would like to clarify our differences with these works.\\n\\n-[3, 4] apply guidance to the noised samples, and [6] incorporate physical guidance in the score function. Our guidance is applied to the **predicted cleaned sample**. After correcting the predicted cleaned sample, we use it to calculate the noised sample at the next timestep. [1] has proposed a similar approach to apply a single step of gradient descent to the noised sample at each backward step as a baseline model, but the performance is inferior to the conditional diffusion model, which implies that it is also inferior to PG-Diff.\\n\\n-The proposed solution in [5] to solve the data subproblem either uses an analytical solution or a single step of gradient descent (Equation 13). Since analytical solutions in our problem are unavailable, we leverage **multiple steps** of gradient descent with **momentum and adaptive scaling** at **selected backward diffusion steps**. We notice that a single step of gradient descent is insufficient to minimize the residual. Thus, we use Adam to consider momentum and adaptive scaling to perform multiple gradient descent steps. 
In addition, differing from [3,4,5], our guidance is applied at selected backward diffusion steps. Our experiments in 4.5 PHYSICAL GUIDANCE reveal that even though an excessive amount of corrections can minimize the residual, it compromises L2 loss. Thus, we apply two corrections at the start and two corrections at the end to achieve an optimal balance between physical consistency (Residual) and predictive accuracy (L2 Loss).\\n\\nWe have revised our manuscript to cite and discuss these related works.\\n\\n### **[Residual of Reference DNS]**\\nThe residual of the reference DNS simulation is reported in the following table. For each frame, we calculate the residual to obtain a 256x256 matrix. Then, we calculate the average sum of squares as our evaluation metric. While the residual reported in our Table 1 is larger than that in [1], [1] normalizes the average sum of squares of the residual using the high-fidelity residual.\\n\\nWe believe that normalization with high-fidelity residuals could cause controversy. While [1] argues that smaller residual metrics are better, one can argue that since the high-fidelity data is the ground truth, reconstructed samples with residuals closer to that of the high-fidelity pair should be better.
Since existing works on CFD super resolution claim that smaller residual metrics are better, we adopt unnormalized residuals in our paper.\\n\\nWe would like to point out that under the evaluation framework where residuals closer to high-fidelity pairs indicate better physical consistency, PG-Diff surpasses all baseline models in every task, since its reconstructed samples have residuals closest to those of the high-fidelity pairs.\\n\\n| | **Kolmogorov** | **McWilliams** | **TGV** | **Decay Turbulence** |\\n|----------------|----------------|----------------|-----------|-----------------------|\\n| **Residual** | 44.13 | 6.64 | 5101.65 | 212.43 |\\n\\n### **[High-Fidelity Data Grid]**\\nWe apologize for the oversight when drafting the paper. The high-fidelity data generation follows [1] and uses the pseudo-spectral solver from [2]. The direct numerical simulation was performed on a 2048x2048 discretization grid, and we then uniformly downsampled the data to 256x256 as our high-fidelity data. We have updated our manuscript to correctly describe the high-fidelity data generation.\\n\\n### **[Adam in Algorithm 1]**\\nThe Adam gradient descent in Algorithm 1 involves multiple steps and uses momentum and adaptive scaling. Empirically, we found that Adam performed better than vanilla gradient descent. Adam is re-initialized after every DDIM step.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction.\\\\\\n[2] Physics-informed neural operator for learning partial differential equations.\\\\\\n[3] Diffusion Posterior Sampling For General Noisy Inverse Problems.\\\\\\n[4] DiffusionPDE: Generative PDE-Solving Under Partial Observation.\\\\\\n[5] Denoising Diffusion Models for Plug-and-Play Image Restoration.\\\\\\n[6] On conditional diffusion models for PDE simulations.\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer Vjib,\\n\\nWe greatly appreciate your time and feedback on our work.
We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper proposes a diffusion-based method for reconstructing high-fidelity physics simulation data given low-fidelity input. The paper also proposes to incorporate prior knowledge via the gradient of PDE residual and a new weighting scheme based on multi-resolution analysis (Wavelet transform) for diffusion loss. The numerical experiments on several 2D flow problem showcase that the proposed method has better L2 accuracy and lower residuals over other baseline methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Upsampling and reconstructing under-resolved physics is important for building hybrid solver and inverse problem in PDE applications.\\n\\n2. A new spatial weighting scheme based on Wavelet transformation which modulate the loss based on the spectrum of signal. The new scheme is technically sound and experiments show that it consistently improves model performance.\\n\\n3. The introduction to the proposed method is clear and easy-to-follow.\", \"weaknesses\": \"1. Applying diffusion model with guidance based on target constraint for the inverse problem is not an entirely new technique (for example, DPS: https://arxiv.org/pdf/2209.14687 and On conditional diffusion models for PDE simulations: https://arxiv.org/abs/2410.16415 has explored similar technique)\\n\\n2. The evaluation of model\\u2019s prediction in terms of physics coherence is relatively vague. First, all the reported residuals are quite large and there is no reference showing what is a reasonable scale. 
While the average error of different frequency components in the wavelet domain is shown, there is no information on the spectrum at different wavelengths.\\n\\n3. The authors run the solver on a coarse grid to get the \\u201clow fidelity\\u201d data and then run the solver on a fine grid to get \\u201chigh fidelity\\u201d data, instead of artificially downsampling the data. The low-fidelity simulation will deviate from the high-fidelity one as time evolves due to the under-resolved error. Yet looking at the figure comparing data trajectory of different fidelities, the general structures of the vortices are very similar across different fidelities (for example, Figure 8) in my opinion. I hope the authors can provide more clarification and analysis regarding the dataset, such as the spectra of the different simulations and perhaps a simple study of mesh convergence.\", \"questions\": [\"Following point 2, what is the residual of the reference DNS simulation?\", \"The authors state that the high fidelity data for 2D Kolmogorov flow is derived by running DNS on a 256 x 256 grid (feel free to correct me if I\\u2019m wrong). However, under a similar setting (Re=1000), prior works like Shu et al. [1] and Kochkov et al. [2] have used a much finer discretization, i.e. 2048 x 2048.\", \"(Minor) In algorithm 1, what is the rationale for using Adam instead of simpler gradient descent? In addition, is Adam re-initialized after every DDIM step?\", \"[1] Shu, D., Li, Z., & Farimani, A. B. (2023). A physics-informed diffusion model for high-fidelity flow field reconstruction. Journal of Computational Physics, 478, 111972.\", \"[2] Kochkov, Dmitrii, et al. 
\\\"Machine learning\\u2013accelerated computational fluid dynamics.\\\" Proceedings of the National Academy of Sciences 118.21 (2021): e2101784118.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewers' Response\", \"comment\": \"Dear Reviewers,\\n\\nAs the author-reviewer discussion period is approaching its end, I would strongly encourage you to read the authors' responses and acknowledge them, while also checking if your questions/concerns have been appropriately addressed.\\n\\nThis is a crucial step, as it ensures that both reviewers and authors are on the same page, and it also helps us to put your recommendation in perspective.\\n\\nThank you again for your time and expertise.\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Response to Reviewer 2FBP\", \"comment\": \"We would like to thank the reviewer for the feedback. In what follows, we hope to address any concerns you might have.\\n\\n### **[Literature Review]**\\nThank you for pointing out other works on high-fidelity fluid flow reconstruction. We have updated Section 2.1 AI for Computational Fluid Dynamics (CFD) to discuss these related works.\\n\\n### **[PSNR and SSIM Results]**\\nWe report the PSNR and SSIM results in the following tables. 
PG-Diff outperforms baseline models in almost every case.\\n\\n### Kolmogorov\\n\\n| Model | 4x Upsampling | | 8x Upsampling | |\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **PSNR** | **SSIM** | **PSNR** | **SSIM** |\\n| **Bicubic** | 21.1257 | 0.4769 | 18.3063 | 0.2479 |\\n| **CNN** | 24.8310 | 0.5190 | 22.9806 | 0.3712 |\\n| **GAN** | 20.6210 | 0.4160 | 20.2323 | 0.3695 |\\n| **Diffusion** | 25.4049 | 0.6487 | 21.5818 | 0.4072 |\\n| **Conditional Diffusion** | 25.2389 | 0.6456 | 20.2067 | 0.3425 |\\n| **PG-Diff** | **26.1733** | **0.6781** | **24.0754** | **0.4409** |\\n\\n### McWilliams\\n\\n| Model | 4x Upsampling | | 8x Upsampling | |\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **PSNR** | **SSIM** | **PSNR** | **SSIM** |\\n| **Bicubic** | 25.1992 | 0.4834 | 21.9313 | 0.2519 |\\n| **CNN** | 28.6248 | 0.5897 | 27.4487 | **0.5164** |\\n| **GAN** | 28.6720 | 0.4621 | 27.7062 | 0.4477 |\\n| **Diffusion** | 29.71018 | 0.6686 | 25.2200 | 0.3884 |\\n| **Conditional Diffusion** | 29.6757 | 0.6665 | 25.1475 | 0.3823 |\\n| **PG-Diff** | **30.0540** | **0.6722** | **28.1972** | 0.4643 |\\n\\n### Taylor Green Vortex\\n\\n| Model | 4x Upsampling | | 8x Upsampling | |\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **PSNR** | **SSIM** | **PSNR** | **SSIM** |\\n| **Bicubic** | 25.4811 | 0.7211 | 19.4033 | 0.5041 |\\n| **CNN** | 26.6066 | 0.7657 | 26.3448 | 0.7422 |\\n| **GAN** | 28.1370 | 0.7597 | 25.0551 | **0.7633** |\\n| **Diffusion** | 30.7706 | 0.8438 | 24.8698 | 0.6678 |\\n| **Conditional Diffusion** | 23.2154 | 0.6767 | 21.3979 | 0.5959 |\\n| **PG-Diff** | **31.5277** | **0.8713** | **26.7503** | 0.7562 |\\n\\n### Decay Turbulence\\n\\n| Model | 4x Upsampling | | 8x Upsampling | 
|\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **PSNR** | **SSIM** | **PSNR** | **SSIM** |\\n| **Bicubic** | 41.7820 | 0.8888 | 35.1032 | 0.7098 |\\n| **CNN** | 41.0336 | 0.8849 | 42.1580 | 0.9200 |\\n| **GAN** | 39.2718 | 0.9055 | 38.9931 | 0.8863 |\\n| **Diffusion** | 47.2289 | 0.9571 | 40.8901 | 0.8622 |\\n| **Conditional Diffusion** | 46.0675 | 0.9138 | 40.1089 | 0.8456 |\\n| **PG-Diff** | **48.0321** | **0.9601** | **43.0276** | **0.9295** |\"}", "{\"summary\": \"This paper proposes PG-Diff, a diffusion model for high-fidelity flow field reconstruction. They introduce importance weight strategy during training as self-guidance and a training-free residual connection method during inference as physical inductive bias, to overcome the distribution shift challenge in this task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper found that current SOTA reconstruction models fail to generate high-quality outputs due to large distribution shifts between low- and high-fidelity data.\", \"To address this issue, they propose a diffusion model to reconstruct the high-quality outputs through the guidance of an Importance Weight strategy during training as self-guidance and\", \"a training-free Residual Correction method during inference as physical inductive bias.\"], \"weaknesses\": \"- Literature review is not comprehensive, some papers in solving high-fidelity fluid flow reconstruction are not properly cited: \\\\\\n[1] \\u201cFu, Cong, Jacob Helwig, and Shuiwang Ji. \\\"Semi-Supervised Learning for High-Fidelity Fluid Flow Reconstruction.\\\" Learning on Graphs Conference. PMLR, 2024.\\u201d \\\\\\n[2] \\u201cEsmaeilzadeh, Soheil, et al. \\\"Meshfreeflownet: A physics-constrained deep continuous space-time super-resolution framework.\\\" SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. 
IEEE, 2020.\\u201d \\\\\\n[3] \\u201cRen, Pu, et al. \\\"PhySR: Physics-informed deep super-resolution for spatiotemporal data.\\\" Journal of Computational Physics 492 (2023): 112438.\\u201d \\\\\\n[4] \\u201cGao, Han, Luning Sun, and Jian-Xun Wang. \\\"Super-resolution and denoising of fluid flow using physics-informed convolutional neural networks without high-resolution labels.\\\" Physics of Fluids 33.7 (2021).\\u201d \\\\\\n\\n- For comprehensive evaluation, I recommend authors also add commonly used metrics from image super-resolution such as PSNR and SSIM. \\n- For Figure 4, the high-resolution and low-resolution simulations basically have very similar pattern, which doesn\\u2019t well demonstrate the assumption that coarse-grained simulation differ from high-resolution simulation. I recommend authors use the test equation in this paper (\\u201cMachine learning accelerated computational fluid dynamics\\u201d: https://arxiv.org/pdf/2102.01010), In Fig 2 of that paper, it\\u2019s clear at time step 1500 that fluid flow simulated from low-resolution input deviate a lot from high-resolution input, where simple super-resolution cannot be applied. I would like to see PG-Diff can reconstruct high-fidelity flow in this scenario.\", \"questions\": [\"In algorithm 1, during inference the input is not pure noise, then how to determine T_guide?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful consideration and for reviewing our rebuttal. We appreciate your careful evaluation and look forward to your final recommendation.\"}", "{\"title\": \"Response to Reviewer qB2f\", \"comment\": \"We would like to thank the reviewer for the feedback. 
In what follows, we hope to address any concerns you might have.\\n\\n### **[Importance Weight Weighting Function]**\\nEquation 6 linearly maps the high frequency signals to an importance weight between alpha and beta. The weight in [2] is adjusted as a result of adjusting noise schedules for higher resolution images. Our attention weight is sample dependent to ensure the model captures high-fidelity fine-grained details.\\n\\n### **[t_guide in Reverse Process]**\\nWe adopt the same t_guide for 4x upsampling and 8x upsampling as [1]. We clarified hyperparameters in Appendix C2 IMPLEMENTATION DETAILS. [1] conducted extensive studies to determine the optimal t_guide to ensure the correct amount of noise is injected to low-fidelity data. We also conducted informal experiments to verify that the t_guide suggested in [1] still works very well in our case.\\n\\n### **[Residual Calculation]**\\nThe calculation of residual is defined in Appendix D.1 RESIDUAL CALCULATION. We also referenced this section in the main paper.\\n\\n### **[Conditional Diffusion Baseline]**\\nThe conditional diffusion baseline follows [1]. At each backward diffusion step, we calculate the gradient of the residual with respect to the noised sample as the condition and concatenate it with the noised sample into our denoiser, a Unet, to predict noise. Our residual correction, on the other hand, utilizes the gradient of the residual with respect to **predicted cleaned samples** during backward diffusion. An intuitive interpretation for superior performance of our method is that the baseline conditional diffusion model projects the noised sample onto the solution subspace of the PDE during denoising, while our method directly projects the predicted cleaned sample onto the solution subspace of the governing PDE at **selected backward diffusion steps**. 
Placing two residual corrections at the beginning quickly projects the sample onto the solution subspace of the PDE to guide the backward diffusion process and two residual corrections at the end guarantees physical consistency of the reconstructed samples.\\n\\n### **[Qualitatively Different Corrections]**\\nSince PG-Diff only requires high-fidelity data during training and does not learn specific low-fidelity and high-fidelity relationships, it is unable to correct frames that are qualitatively incorrect as a postprocessing method. We ensured that the low-fidelity data in current experiments are qualitatively correct. However, we show that when integrated within the solver, PG-Diff can address this challenge. We present the results in the Appendix of our modified paper.\\n\\nThe method follows \\u201csolver->super resolution->downsample->solver\\u201d We first use a numerical solver for one-step predictions at a coarse grid. Then, we apply PG-Diff for super resolution, and then downsample it to a coarse grid, which is used as input for next step simulation.\\n\\nWe would like to highlight that even though at some point in the numerical simulation, PG-Diff will not be able to recover the trajectory, a core benefit of PG-Diff and [1] is that it only requires high-fidelity data during training. When presented with a new type of low-fidelity data, for example 40x40, PG-Diff only needs a small amount of validation dataset to determine the optimal t_guide. Then the pre-trained model can be directly applied to reconstruct from the low-fidelity data.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction.\\\\\\n[2] Machine learning accelerated computational fluid dynamics.\"}", "{\"metareview\": \"This paper proposes PG-Diff, a diffusion model for super-resolution in CFD, that reconstructs high-fidelity flow fields from low-fidelity inputs. 
The key innovations include an \\\"Importance Weight\\\" strategy during training, prioritizing high-frequency components via wavelet transforms, and a training-free \\\"Residual Correction\\\" method during inference, incorporating physical constraints from governing equations like the Navier-Stokes equations. The authors evaluated PG-Diff on four CFD benchmarks, where it demonstrated improved accuracy and reduced residuals compared to existing super-resolution methods.\\n\\nThe reviewers generally agree that the paper is well-written and easy to follow, highlighting the use of wavelet transforms and physics-informed (residual) corrections to enhance the performance of the diffusion model in CFD benchmarks with respect to the baselines.\\n\\nHowever, many reviewers raised questions about the validity of the claims, particularly with respect to the issue of distribution shifts and the claims on speed-up with respect to numerical solvers. The reviewers also raised concerns about the positioning of the paper in the literature regarding the super-resolution of divergent solutions (also called downscaling), noting that the problem has already been studied and tackled before; particularly using diffusion models in [1,2,3]. Thus, greatly reducing the novelty of the approach, as many of those baselines are missing.\\n\\nGiven the weaknesses of the paper, I recommend rejection.\", \"references\": \"[1] Mardani, Morteza, et al. \\\"Residual corrective diffusion modeling for km-scale atmospheric downscaling, 2024.\\\" URL https://arxiv.org/abs/2309.15214.\\n\\n[2] Bischoff, Tobias, and Katherine Deck. \\\"Unpaired downscaling of fluid flows with diffusion bridges.\\\" Artificial Intelligence for the Earth Systems 3.2 (2024): e230039.\\n\\n[3] Wan, Zhong Yi, et al. 
\\\"Debias coarsely, sample conditionally: Statistical downscaling through optimal transport and probabilistic diffusion models.\\\" Advances in Neural Information Processing Systems 36 (2023): 47749-47763.\", \"additional_comments_on_reviewer_discussion\": \"The main issue raised by the reviewers was that the claims were not properly backed up by numerical experiments, and the authors were not familiar with an extensive literature solving the same problem (including methods leveraging diffusion models). The authors did not address these issues satisfactorily.\"}", "{\"title\": \"Response to Reviewer EzG5\", \"comment\": \"We would like to thank the reviewer for the feedback. In what follows, we hope to address any concerns you might have.\\n\\n### **[Introduction Statement]**\\nThank you for pointing out these related works. We have revised our manuscript to correctly cite and discuss these works. However, we would like to point out that even though [1, 2] benchmarks their super resolution models on solver generated low-fidelity data, they did not explicitly address the loss of fine-grained details in reconstructed samples under such a setting. [1] combines a Unet with physics-informed loss function, and [2] uses a CNN and standard diffusion models. Our paper, on the other hand, proposes novel Importance Weight and Residual Correction to enhance the predictive accuracy and physical consistency of the reconstructed samples.\\n\\nThe state-of-the-art diffusion model is unable to reconstruct fine-grained details because solver generated low-fidelity data contains less information on these fine-grained details compared to downsampled low-fidelity data. Thus, we propose Importance Weight to force the diffusion model to reconstruct fine-grained details at different noise levels during training.\\n\\n[3] only implies that adding noise would make downsampled low-fidelity data and noisy high fidelity data to approach the same distribution. 
However, with significant distribution shifts, for example, Kolmogorov Flow data with McWilliams Flow data, we would need to add a significant amount of noise to make these two distributions similar. At that point, both noised distributions would be completely random, and the diffusion model generates data unconditionally.\\n\\nWe have updated our manuscript to clearly express these thoughts.\\n\\n### **[Section 4.6 Model Generalization Experiment]**\\nOur super resolution model is aimed to combine with traditional numerical solvers to accelerate high-fidelity simulation. Thus, we aim to test whether PG-Diff, trained with data from one solver configuration, can be extended to data from different solver configurations. We do not claim that PG-Diff can generalize to distributions significantly different from training.\\n\\n### **[Runtime Comparison]**\\nOur runtime comparison is revised under the new setting. Results show that PG-Diff can still accelerate the generation of high-fidelity data.\\n\\n### **[LPIPS Score]**\\nWe believe that the perceptual aspects of LPIPS are not inherently tied to ImageNet\\u2019s specific domain (natural images) but arise from the hierarchical feature extraction of convolutional architectures. Low-level features such as edges and textures, and mid-level features such as structures and patterns, are domain-agnostic. In the case of CFD vorticity data, we observe analogous structures and patterns, such as coherent vortical regions and flow features.\\n\\nWe also add PSNR and SSIM results to demonstrate the superior performance of PG-Diff in the Appendix.\\n\\n### **[Multiscale Evaluation]**\\nIn Line 375, we claimed that \\u201cPG-Diff demonstrates superior performance in the LL, LH, and HL subdomains, achieving the best or near-best results among all methods\\u201d. We acknowledge that PG-Diff does not have best performance in HH subdomain, and we explicitly stated that. 
For LL, LH, and HL subdomains, our claim that PG-Diff has the best or near-best results among all methods does hold true.\\n\\n### **[Residual Correction]**\\nThe hyperparameter tuning of our residual correction is conducted on Kolmogorov Flow only, and the same hyperparameter is used for all other datasets. Our Residual Correction involves multiple steps of Adam gradient descent applied at selected backward diffusion steps. The gradient descent projects the reconstructed samples onto the solution subspace of the PDE by minimizing the residuals. However, in super resolution tasks, we not only want the reconstructed sample to lie on the solution subspace of the PDE to ensure physical consistency, we also want the difference between the reconstructed sample and the ground truth high-fidelity data to be minimized to ensure predictive accuracy. Thus, we conduct experiments in Section 4.5 PHYSICAL GUIDANCE to find the optimal scheduler and number of correction steps to ensure an optimal balance between physical consistency (Residual) and predictive accuracy (L2 Loss).\\n\\n### **[Generalization Experiment Re=2000]**\\nDue to computational limitations, the models in the generalization experiments are trained only once. The improvements likely come from the model trained on Re=2000 data being an outlier.\"}", "{\"comment\": \"Thanks so much for your time and effort to address my previous concern. However, the current version still has some limitations in my mind:\\n\\n1) The authors mentioned that \\u201cTraditional approaches such as Direct Numerical Simulation (DNS) (Orszag, 1970) offer high-resolution solutions. However, they are computationally expensive,...\\u201d Based on this statement, it seems that the traditional methods could work well in terms of quality but it may incur additional computational costs. 
Thus, for experiments, the authors should: (1) include one or two traditional methods; (2) include the computational cost as another performance metric.\\n\\n2) I agreed with other reviewers about the statement of \\u201cdistribution shift\\u201d. \\u201cOur experiments reveal that state-of-the-art models struggle to recover fine-grained high-fidelity details given solver generated low-fidelity inputs, due to the large distribution shifts.\\u201d It is a little bit confusing here, do you mean the large distribution shifts between the \\u201csolver generated low-fidelity data\\u201d and the \\u201cdirectly downsampled low-fidelity data\\u201d? The authors should clarify this and include some evidence. Based on Fig1, it does not seem there are large distribution shifts, instead, it may be only one that includes a little bit more texture information compared to the other?\\n\\n3) Thanks to the authors for adding the experiments when data contains Gaussian noise. However, adding Gaussian noise may not be sufficient to address my previous concern as the diffusion model may naturally handle Gaussian noise very well due to the \\u201cdiffusion process\\u201d while it may fail if other kinds of noise are injected. It would be better if the authors could simulate a \\u201creal-world\\u201d scenario with \\u201creal-world\\u201d noises.\"}", "{\"title\": \"Response to Reviewer ovnW\", \"comment\": \"We would like to thank the reviewer for the feedback. In what follows, we hope to address any concerns you might have.\\n\\n### **[Wavelet Transformation vs Laplace Operator]**\\nWhen designing the model, we compared the performance of wavelet-transformation-based importance weight and gradient-based importance weight. Empirically, we found that wavelet transformation produced better results. We report the performance of PG-Diff with wavelet-transformation-based, gradient-based, or Laplacian-based importance weight only (Residual Correction is not included) in the following table. 
The experiments are conducted on Kolmogorov Flow and tested on validation dataset. We hypothesize that the better performance of wavelet transformation based importance weight is due to its better assignment of $a_{i,j}$.\\n\\n| | 4x Upsampling | | 8x Upsampling | |\\n|----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **L2** | **Residual** | **L2** | **Residual** |\\n| **Wavelet** | 1.7363 | 526.62 | 2.9029 | 661.45 |\\n| **Gradient** | 1.7603 | 897.38 | 3.0168 | 935.46 |\\n| **Laplacian** | 1.7510 | 703.71 | 2.9672 | 681.45 |\\n\\nAdditionally, in Table 1, we presented PG-Diff with both Importance Weight and Residual Correction as well as two ablation studies PG-Diff w/o Cor (Diffusion model with Importance Weight only) and PG-Diff w/o IW (Diffusion model with Residual Correction only). The results from PG-Diff and PG-Diff w/o Cor consistently show that Importance Weight reduces the L2 loss by a margin and Residual Correction crucially aids in physical consistency. Thus, we believe Importance Weight is a useful module.\\n\\n### **[Diffusion Model Guided Generation Literature]**\\nThank you for pointing out other related work in diffusion model guided generation. We would like to clarify our differences with these works.\\n\\n-[3, 4] apply guidance to the noised samples, and [6] incorporate physical guidance in the score function. Our guidance is applied to the **predicted cleaned sample**. After correcting the predicted cleaned sample, we use it to calculate the noised sample for the next timestep. 
[1] has proposed a similar approach to apply a single step of gradient descent to the noised sample at each backward step as a baseline model, but the performance is inferior to the conditional diffusion model, which implies that it is also inferior to PG-Diff.\\n\\n-The proposed solution in [5] to solve the data subproblem either uses an analytical solution or a single step of gradient descent (Equation 13). Since analytical solutions in our problem are unavailable, we leverage **multiple steps** of gradient descent with **momentum and adaptive scaling** at **selected backward diffusion steps**. We notice that a single step of gradient descent is insufficient to minimize the residual. Thus, we use Adam to consider momentum and adaptive scaling to perform multiple gradient descent steps. In addition, differing from [3,4,5], our guidance is applied at selected backward diffusion steps. Our experiments in 4.5 PHYSICAL GUIDANCE reveal that even though an excessive amount of corrections can minimize the residual, it compromises L2 loss. Thus, we apply two corrections at the start and two corrections at the end to achieve optimal balance between physical consistency(Residual) and predictive accuracy(L2 Loss).\\n\\nWe have revised our manuscript to cite and discuss these related works.\\n\\n### **[High-Fidelity Data Grid]**\\nWe apologize for the oversight when drafting the paper. The high-fidelity data generation follows [1] and uses the pseudo-spectral solver from [2]. The direct numerical simulation was performed on a 2048x2048 discretization grid and then we uniformly downsample the data to 256x256 as our high-fidelity data. 
We have updated our manuscript to correctly describe the high-fidelity data generation.\\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction.\\\\\\n[2] Physics-informed neural operator for learning partial differential equations.\\\\\\n[3] Diffusion Posterior Sampling For General Noisy Inverse Problems.\\\\\\n[4] DiffusionPDE: Generative PDE-Solving Under Partial Observation.\\\\\\n[5] Denoising Diffusion Models for Plug-and-Play Image Restoration.\\\\\\n[6] On conditional diffusion models for PDE simulations.\"}", "{\"comment\": \"Thank you for taking the time to review our rebuttal and for raising the score. We greatly appreciate your thoughtful feedback and support.\"}", "{\"summary\": \"This paper proposes a diffusion model designed to reconstruct high-fidelity computational fluid dynamics (CFD) data from low-fidelity solver-generated inputs. Traditional machine learning models for CFD rely on low-fidelity data artificially downsampled from high-fidelity sources, which limits performance in real-world applications. 
This paper trains on the data run at two different grid sizes and proposes two directions to improve upon this:\\n\\n- Importance Weight Strategy during training: It uses wavelet transformation to assign importance to high-frequency flow field components, guiding the model toward better reconstruction of detailed structures.\\n- Residual Correction during inference: This physics-informed module applies corrections based on governing equations (e.g., Navier-Stokes) to ensure physical accuracy, especially for turbulent flows.\\n\\nThe model is evaluated on four datasets, generated by the incompressible Navier-Stokes equations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper points out an important issue with training ML models for super-resolving computational fluid dynamics (CFD) which is when running CFD at two different grid sizes, the dynamics between them can diverge and will not correspond to an interpolation of each other. This paper argues that one should train ML models from simulations run at different resolutions.\\n\\nThe paper is very nicely written. There is a sufficient number of figures to explain the core ideas. \\n\\nI found the idea of using residual error correction an interesting development that makes sure that the generated high-fidelity solution matches with the underlying PDE.\", \"weaknesses\": \"The weighting function in 6 seems arbitrary and hacky. There is no intuitive explanation of why this form of weighting mechanism is needed. Note that this mechanism should be discussed in the context of prior work such as simple diffusion [1] who also suggest reweighting the objective based on the resolution of the data.\\n\\nThe paper discusses the fact that running PDEs at different resolutions will not correspond to the same solutions that are interpolated. 
However, it is known that running PDEs at different grid sizes will eventually diverge over time to the point that we cannot recover the high-fidelity solution from the low-fidelity input. This work does not discuss how one can correct the low-fidelity simulation with the information received from the high-fidelity solution. \\n\\n\\n[1] Simple diffusion: End-to-end diffusion for high resolution images, by Hoogeboom.\", \"questions\": \"In Section 3, it is stated that the reverse process is started from $x_{t_{guide}}$. How do you choose t here? How do you ensure that adding noise does not wash out the necessary signal for high-fidelity output generation?\\n\\nR introduced in Algorithm 1 is central to the residual correction idea but it has not been defined properly. \\n\\nI am a bit surprised to see that the diffusion model outperforms conditional diffusion models in table 1. Several works including [2] have used conditional diffusion models for this? How is the diffusion baseline set up?\\n\\nTable 1 is very hard to read with all scientific number notation (with e). I would recommend using fixed decimal points.\\n\\n\\n\\n[2] Residual Corrective Diffusion Modeling for Km-scale Atmospheric Downscaling, Mardani et al. 2023\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Follow-Up Feedback on Author Rebuttal\", \"comment\": \"Dear Reviewer Vjib,\\n\\nOur detailed rebuttal has been submitted, and we have thoroughly addressed all the points and suggestions you raised. We understand the significant workload involved in reviewing papers, but we kindly request your feedback on our responses to ensure that the discussions are as productive and comprehensive as possible.\\n\\nAs the discussion phase is coming to an end, we have so far received feedback from many others, and would like to get your insights in refining the final version of our work. 
We believe there were some misunderstandings, which we have clarified in the rebuttal above, along with new supporting experimental results. We sincerely hope you can provide us with feedback.\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer qB2f,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"As other reviewers pointed out, the problem of diverging PDE solutions at different scales has already been discussed in previous papers. The idea of reweighting different resolutions and using wavelet-based decomposition was explored in prior works such as simple diffusion. Given these, this paper does not propose a significantly new perspective on the problem, and as a result, I've decided to reduce my rating to 5 from 6.\"}", "{\"title\": \"Response to Reviewer Vjib\", \"comment\": \"We would like to thank the reviewer for the feedback. In what follows, we hope to address any concerns you might have.\\n\\n### **[Motivation and Problem]**\\nOur work is motivated by the observation that existing works on CFD super resolution typically assume that low-fidelity data is artificially downsampled from high-fidelity sources. However, in real-world scenarios, low-fidelity data is generated by numerical solvers, after which machine learning models upsample it to high-fidelity. 
This approach aims to produce high-fidelity data more efficiently than direct numerical simulation through numerical solvers.\\n\\nWe address the problem that existing models struggle to recover fine-grained high-fidelity details when evaluated on solver-generated low-fidelity data. To improve reconstruction of detailed and accurate structures, we propose the Importance Weight module to guide the diffusion model during training. We also develop a novel, training-free Residual Correction applied exclusively during inference to ensure physical coherence.\\n\\nWe have revised our manuscript to clearly state the motivation and the problem in the Introduction.\\n\\n### **[Real World Applications]**\\nOur method aims to accelerate the generation of high-fidelity CFD data by using numerical solvers to first generate less expensive low-fidelity data, and then upsample it to high-fidelity using machine learning models. Low-fidelity data generated from less accurate numerical simulation would be the input in real-world applications. For fluid dynamic data collected from the real world, there is no dataset with both high-fidelity and low-fidelity pairs. We artificially inject Gaussian noise into our solver-generated low-fidelity data to mimic real-world fluid scenarios. Since the vorticity data are continuous, shot noise is not applicable. We inject N(0,1) noise into the 64x64 data and N(0,3) noise into the 32x32 data. The experiment results are summarized in the following tables. 
PG-Diff can still outperform baselines by a margin.\\n\\n### Kolmogorov Flow with Gaussian Noise\\n\\n| Model | 4x Upsampling | | 8x Upsampling | |\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **L2** | **Residual** | **L2** | **Residual** |\\n| **Bicubic** | 3.1657 | 2117.09 | 5.4191 | 8100.92 |\\n| **CNN** | 2.2908 | 1154.44 | 3.1189 | 1416.51 |\\n| **GAN** | 2.9324 | 380.81 | 3.0193 | 1746.95 |\\n| **Diffusion** | 1.7959 | 404.00 | 3.1142 | 281.30 |\\n| **Conditional Diffusion** | 1.799 | 229.54 | 3.1380 | 110.21 |\\n| **PG-Diff** | **1.7558** | **30.2461** | **2.8986** | **44.3356** |\\n\\n### McWilliams Flow with Gaussian Noise\\n\\n| Model | 4x Upsampling | | 8x Upsampling | |\\n|-----------------------|--------------------|---------------------|--------------------|---------------------|\\n| | **L2** | **Residual** | **L2** | **Residual** |\\n| **Bicubic** | 2.2729 | 521.15 | 4.1528 | 4435.18 |\\n| **CNN** | 1.6117 | 187.16 | 2.1800 | 781.93 |\\n| **GAN** | 1.6255 | 439.84 | 2.3673 | 2917.50 |\\n| **Diffusion** | 1.2995 | 51.66 | 2.2624 | 240.42 |\\n| **Conditional Diffusion** | 1.3114 | 9.13 | 2.3402 | 53.69 |\\n| **PG-Diff** | **1.2543** | **6.25** | **2.0374** | **5.55** |\\n\\n### **[Visual Results]**\\nPG-Diff has demonstrated superior results in reconstructing fine-grained details especially in Kolmogorov Flow and McWilliams Flow. We have updated Figure 3 to highlight the differences between PG-Diff and other diffusion models. Additionally, we also reported LPIPS to measure the perceptual quality of the reconstructed results. 
PG-Diff outperforms baseline models in this metric.\"}", "{\"title\": \"Comparison with statistical downscaling references\", \"comment\": \"Dear Reviewer qB2f,\\n\\nAs you have the most enthusiastic review, and given that you suggested the statistical downscaling baseline, how would you compare this work with https://openreview.net/forum?id=5NxJuc0T1P, as both seem to tackle a similar problem (although the language seems to be quite different). \\n\\nIn addition, this work https://journals.ametsoc.org/view/journals/aies/3/2/AIES-D-23-0039.1.xml, also seems relevant for the current paper.\\n\\nThank you again for your time and expertise!\\n\\nBest, \\n\\nAC.\"}", "{\"comment\": \"Thanks for addressing my concerns. I am considering other reviewers' comments carefully before the final score recommendation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer ZSxU,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper aims to improve on prior super-resolution works by introducing a new importance weighting and training-free correction method to train a diffusion model. 
The paper presents experiments on 4 benchmarks, and shows improved performance over baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper has a few key strengths: (1) the model improves over baseline models, (2) the benchmarks are a challenging and convincing set of problems, (3) the DWT-based importance sampling seems to work well, and is well-motivated, (4) the results and ablation studies are well done and provide good insight.\", \"weaknesses\": \"Unfortunately, this paper also has significant flaws that lead me to doubt its novelty and contributions, and as a result, this paper should be rejected in its current state. The main concern is that many of the claims made in the paper are either not well-supported or are false.\\n\\n(1) Section 1, Introduction: \\u201cWe study a novel problem on reconstructing high-fidelity flow fields with solver-generated low-fidelity data, benefiting real-world applications. Our experiments reveal that state-of- the-art reconstruction models fail to generate high-quality outputs due to large distribution shifts between low- and high-fidelity data.\\u201d\\n\\nThe first part of this claim, which is central to the paper, is not true. Upsampling solver-generated low-fidelity data has been both posed as well as solved before [1, 2]. While the provided references are not exhaustive, prior work has considered the limitation of super-resolution works downsampling high-resolution data and have worked directly with solver-generated low-fidelity data. \\n\\nThe second part of this claim does not seem to be well-supported. While it is true that the proposed model outperforms current diffusion models, there is not much evidence for the provided reason being due to a distribution shift. In fact, one of the main insights presented in the cited prior work from Shu et al. 
[3] is that by noising data samples, they all approach the same distribution regardless of the original distribution being from high or low fidelity data. By that argument, diffusion models should be agnostic to whether the guide data comes from downsampled high-resolution simulations or low-resolution simulations, since when fully noised these samples are drawn from nearly identical distributions. \\n\\n(2) Section 4.6, Model Generalization: \\u201cWe observe that PG-Diff generalizes well even beyond its training distribution\\u201d\\n\\nThe spatial domain size variations don\\u2019t seem to generate samples outside of the training distribution, since the smaller domains are a subset of the larger training domain. The other test cases could be out of the training distribution, but even then, using a smaller dt shouldn\\u2019t significantly alter the data distribution. \\n\\n(3) Section 4.3, Runtime Comparison:\\u201cPG-Diff is considerably faster than the time required to produce high- fidelity data directly through numerical solver\\u201d\\n\\nFollowing one of the cited papers from McGreivy & Hakim [4], the runtime comparison with a numerical solver is not faithfully done. In particular, the numerical solver should be coarsened until it achieves a similar L2 error as the proposed method. The runtime of the coarsened solver should be compared with the proposed method instead of the zero-error, ground truth numerical runtime. \\n\\nThere are a few more minor concerns about the work. \\n\\n(1) Section E.4: The argument about LPIPS being a suitable metric is not empirically or theoretically sound. 
The given reasoning about ImageNet containing multiscale features and textures leading to it generalizing to CFD applications is somewhat hand-wavy and without providing evidence, it is challenging to claim that a model trained on ImageNet would be \\u201ca robust metric\\u201d for physics simulations.\\n\\n(2) Multiscale Evaluation: The authors claim to achieve best or near-best results on all subdomains but there are results where the proposed method is not the best, especially in the Appendix Figures 11, 12, 13.\\n\\n(3) Residual Correction During Inference: While the technique improves performance, it seems highly dependent on hyperparameters. In particular, there is reduced performance when using more correction steps, likely due to the data being corrected to samples outside of the model\\u2019s training distribution. It seems odd that a core method of the paper asymptotically reduces performance (i.e., as more compute/refinement is done, the worse the model gets).\", \"questions\": \"Do you have any intuition as to why the model trained on Re=2000 performs worse than the model trained on the original data when evaluated on a Re=2000 test case? (Table 3)\\nIs there a reason why the bicubic upsampling is quite slow? (Table 8)\\n\\nAs a whole, the main contributions of the paper seem to be an importance weighting and correction step during inference, which seems to be an incremental improvement on a prior work from Shu et al. [3].\\n\\nRajat Kumar Sarkar, Ritam Majumdar, Vishal Jadhav, Sagar Srinivas Sakhinana, Venkataramana Runkana. Redefining Super-Resolution: Fine-mesh PDE predictions without classical simulations. https://arxiv.org/abs/2311.09740\\nFrancis Ogoke, Quanliang Liu, Olabode Ajenifujah, Alexander Myers, Guadalupe Quirarte, Jonathan Malen, Jack Beuth, Amir Barati Farimani. Inexpensive high fidelity melt pool models in additive manufacturing using generative deep diffusion. 
https://www.sciencedirect.com/science/article/pii/S0264127524005562\\nDule Shu, Zijie Li, Amir Barati Farimani. A physics-informed diffusion model for high-fidelity flow field reconstruction. https://www.sciencedirect.com/science/article/pii/S0021999123000670\\nNick McGreivy, Ammar Hakim. Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations. https://www.nature.com/articles/s42256-024-00897-5\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer 2FBP,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes PG-Diff which uses diffusion model to generate high-fidelity computational fluid dynamics (CFD) data when given low-fidelity CFD data. To make it works, during the training phase, the authors introduce \\\"Importance Weight\\\" to the loss function; during the inference phase, the authors introduce \\\"Residual Correction\\\" as physical guidance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper leverages diffusion model to solve problems in fluid dynamics. One advantage of this current version is that the authors incorporate physical information to guide diffusion model. 
In addition, the authors also conduct detailed analysis on important hyper-parameters and model generalization.\", \"weaknesses\": \"There are several weaknesses in this current version.\\n1) The motivation and problem setting are not clear. I would suggest the authors re-organize the \\\"Introduction\\\" to clearly state the problems as well as the challenges.\\n2) The overall presentation is a little bit messy.\\n3) The experiments are only conducted on simulated datasets; how would this proposed method work in a real-world application? For example, what will happen if the low-fidelity CFD data contains noise such as Gaussian noise/shot noise? Will this proposed method still work?\\n4) Based on the visual results, the proposed method does not show superior performance compared to other diffusion models.\", \"questions\": \"I list my questions in the \\\"weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
EZXXJmuMd7
Textual Aesthetics in Large Language Models
[ "Lingjie Jiang", "Shaohan Huang", "Xun Wu", "Furu Wei" ]
Image aesthetics is a crucial metric in the field of image generation. However, textual aesthetics has not been sufficiently explored. With the widespread application of large language models (LLMs), previous work has primarily focused on the correctness of content and the helpfulness of responses. Nonetheless, providing responses with textual aesthetics is also an important factor for LLMs, which can offer a cleaner layout and ensure greater consistency and coherence in content. In this work, we introduce a pipeline for aesthetics polishing and use it to construct a textual aesthetics dataset named TEXAES. We propose a textual aesthetics-powered fine-tuning method based on direct preference optimization, termed TAPO, which leverages textual aesthetics without compromising content correctness. Additionally, we develop two evaluation methods for textual aesthetics based on text and image analysis, respectively. Our experiments demonstrate that using textual aesthetics data and employing the TAPO fine-tuning method not only improves aesthetic scores but also enhances performance on general evaluation datasets such as AlpacaEval and Arena-hard.
[ "Large Language Model", "Textual Aesthetics" ]
https://openreview.net/pdf?id=EZXXJmuMd7
https://openreview.net/forum?id=EZXXJmuMd7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zx3ebiWK0J", "xVljZJkd0x", "uJgP3FTWdI", "s2qjX4GOaQ", "rgnqXiKL2u", "r2cwRpTuK2", "qzR5XY7A2b", "o09ajPy0rx", "kdHFWlsz8m", "jYrLLDrdqW", "ibvwxqjpAf", "eYIir6MXth", "cbj1T2whfp", "WuS6tzOEul", "WsAH2RNpbl", "VGd4ouSsfP", "UkrCmJuth9", "RcrHMIsyR0", "JPJJSZZ6hK", "CjqQqMsWTP", "6Yn4bZgrld", "4gwboZeXwz", "3JCRaYsu5b" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730678675755, 1731898110424, 1731902994524, 1731903482891, 1731901949571, 1734060042083, 1732504676865, 1732781202942, 1731903965433, 1731897988466, 1732539273651, 1732678376535, 1731901599024, 1730295614111, 1732504256070, 1730618678580, 1732716167533, 1731901055356, 1732501635048, 1731901801795, 1732775894606, 1732547112826, 1731903136903 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_uWJT" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_NtqG" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_NtqG" ], [ 
"ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_sukG" ], [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_NtqG" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Reviewer_uWJT" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ], [ "ICLR.cc/2025/Conference/Submission6744/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors tackle the problem of textual aesthetics in LLMs. To this end, they introduce a new dataset named TEXAES. In addition, they proposed Textual Aesthetics Preference Optimization (TAPO) method. The experiments show that the proposed method performs well on the benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well organized.\", \"The problem of textual aesthetics in LLMs is interesting.\"], \"weaknesses\": \"-I have doubts about the textual aesthetics scores. The scores should be decided by human, not by ChatGPT.\\n-The proposed textual aesthetics-powered training actually aims to predict the scores as close as ChatGPT, not human. \\n-The authors did mention 3 evaluators, 2 graduate students and one professor. First, the number of evaluators is too small. Second, there is no information about the evaluations, for example, age, background, first language, and expertise.\", \"questions\": \"Why did the authors use ChatGPT to provide aesthetics scores? Why not human?\\n\\nWhy is there no information about the evaluators? 
Is it fair for the data collection?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> **Q4**: The proposed textual aesthetics-powered training actually aims to predict the scores as close as ChatGPT, not human.\\n \\n**A4:** Our proposed textual aesthetics-powered training is designed to improve the responses of large language models by enhancing their readability and comprehensibility, ensuring the outputs are easier to read, understand, and interact with. The ultimate goal is to align the models\\u2019 responses with human textual aesthetics preferences.\\n\\nTo achieve this, we employ GPT-4o as a cost-effective and scalable evaluator for text aesthetics. As shown in Figure 2 and Table 5 of the main text, GPT-4o\\u2019s Text-Based and Image-Based Text Aesthetic Scoring methods demonstrate considerable consistency with human textual aesthetics preferences. This consistency indicates that GPT-4o can effectively reflect human-like scoring tendencies in textual aesthetics evaluations.\\n\\nBy leveraging GPT-4o, we are able to process large datasets efficiently, ensuring that the training aligns with human preferences while maintaining scalability. While GPT-4o is used as an evaluator in our framework, the ultimate aim remains rooted in improving the aesthetics of LLM outputs to meet human textual preferences.\\n\\n---\\nWe want to express our sincere gratitude for your review. If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you. :)\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"title\": \"Rebuttal by Authors (1/4)\", \"comment\": \"We are grateful to the Reviewer for the extensive review. 
We address your questions point by point below.\\n\\n> **Q1: The paper does not elaborate on the quantitative standards for textual aesthetics, that is, what constitutes good textual aesthetics, and how the consistency among human evaluators in textual aesthetics assessment is ensured.**\\n \\n**A1**: As described in Section 3.3, textual aesthetics are evaluated across four fundamental dimensions: clarity (ease of comprehension), layout (visual organization), uniformity (consistent formatting), and coherence (logical structure). These dimensions are assessed together, resulting in a comprehensive Likert-scale evaluation (1\\u20135) during pairwise comparisons against a baseline model (GPT-4-0314). For robustness, the Bradley-Terry model is utilized to aggregate these pairwise comparisons into a unified ranking, ensuring consistency and quantitative evaluation of textual aesthetics.\\n\\nTo ensure consistency among human evaluators, we implemented a rigorous preparation process. Before starting the annotation task, evaluators underwent a learning phase to familiarize themselves with the definitions and evaluation criteria for textual aesthetics. They were instructed to comprehensively analyze and compare text samples across the four dimensions (clarity, layout, uniformity, and coherence) to assign scores (win/tie/loss) for textual aesthetics between models. Each annotator was tested on a subset of the dataset to confirm their understanding of these definitions prior to commencing the full annotation task.\\n\\nAdditionally, consistency among evaluators was verified through an inter-annotator agreement study (reported in Section 6.5, Table 5), which showed an average agreement rate of 78.84% among annotators, comparable to rates observed in previous human evaluation studies (MT-Bench[1], UltraFeedback[2]), confirming that evaluators consistently applied the evaluation criteria. 
This process ensures reliable and consistent textual aesthetics assessment.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"Ultrafeedback: Boosting language models with scaled ai feedback\\\"\\u00a0ICML 2024.\"}", "{\"title\": \"Rebuttal by Authors (3/4)\", \"comment\": \"> **Q3: The paper does not clearly explain how to determine if texts with better textual aesthetics are better understood, and how to ensure that human evaluators are making judgments based on content comprehension rather than just neat formatting.**\\n \\n**A3**: Thank you for your detailed feedback, and we apologize for not providing a more thorough explanation of the cases in the main text due to space constraints. Below, we address your concerns by discussing specific examples from Figure 4 of main text, explaining how to determine if texts with better textual aesthetics are better understood. In the latest version of the paper, we will include the detailed explanation.\\n\\nRegarding **case 1** at the top of Figure 4 of main text, the example focuses on generating a mnemonic for the Kanji meaning \\\"Wish,\\\" using the primitives \\\"clock\\\" and \\\"heart.\\\" LLaMA-3.1-8B-TAPO demonstrates significant improvements in textual aesthetics by providing multiple mnemonic options, each clearly separated and thoughtfully worded. This structured presentation enhances the layout and logical organization of the response, making it easier for learners to navigate and choose the mnemonic that resonates most with them. The flexibility offered by multiple options ensures that the text is not only visually appealing but also functionally effective, allowing readers to better understand and retain the information. 
In contrast, the single mnemonic provided by LLaMA-3.1-8B-Instruct lacks the same level of clarity and variety, making it less engaging and harder to interpret.\\n\\nRegarding **case 2** in the center of Figure 4, the example examines the text involving the term bug. The response provided by LLaMA-3.1-8B-TAPO enhances readability by emphasizing each occurrence of the term bug with formatting that makes it immediately clear which part of the text each explanation pertains to. This formatting choice allows readers to quickly identify the specific context and meaning of each instance of bug. Additionally, the use of concise phrases aligned with the numbered list structure further improves the layout and clarity of the output. Therefore, **case 2** is easier to understand.\\n\\nFor **case 3** (as previously discussed in **A2**), the folk-style melody output by LLaMA-3.1-8B-Instruct suffers from fragmented line breaks, splitting logical sequences of notes into disjointed segments. This diminishes the readability, visual organization, and coherence of the output, making it challenging for users to interpret and perform the melody accurately. In contrast, LLaMA-3.1-8B-TAPO provides a well-structured output with appropriate line breaks and logical grouping of notes. This improves clarity, layout, and logical structure, making the melody easier to read and more user-friendly for musical performance. 
Such thoughtful formatting aligns with the principles of textual aesthetics and ensures the usability of the text for its intended purpose.\\n\\nWe hope this explanation helps you better understand how texts with better textual aesthetics can improve readability and comprehension.\\n\\n> **Q4: How to ensure that human evaluators are making judgments based on content comprehension rather than just neat formatting.**\\n \\n**A4**: To ensure that human evaluators focus on content comprehension rather than being influenced solely by neat formatting, we implemented the following measures. \\n\\n**Training and Familiarization:** Before starting the annotation task, evaluators underwent a learning phase to familiarize themselves with the definitions and evaluation criteria for textual aesthetics. They were instructed to comprehensively analyze and compare text samples across the four dimensions (clarity, layout, uniformity, and coherence) to assign scores (win/tie/loss) for textual aesthetics between models. Each annotator was tested on a subset of the dataset to confirm their understanding of these definitions prior to commencing the full annotation task.\\n\\n**Clear Evaluation Criteria:** we use **well-defined evaluation criteria** aligned with the four dimensions of textual aesthetics: clarity (ease of comprehension), layout (visual organization), uniformity (consistent formatting), and coherence (logical structure). Evaluators are instructed to assess these dimensions holistically while ensuring that content comprehension remains their primary focus.\"}", "{\"title\": \"Rebuttal by Authors (4/4)\", \"comment\": \"> **Q5:** Generalizability to Other LLMs: The experiments focused on LLaMA series models. What are the authors' expectations or initial findings regarding the generalizability of the proposed methods (TEXAES and TAPO) to other LLMs? Have any preliminary tests been conducted on different models?\\n \\n**A5:** This is a meaningful suggestion. 
To investigate the generalizability of our proposed methods (TEXAES and TAPO) to other LLMs, we conducted additional experiments on two different models: Qwen2-7B-Instruct and Mistral-7B-Instruct-v0.3. We used TEXAES as the training dataset and applied TAPO for training under the same settings as LLaMA-3.1-8B-Instruct. The results are summarized in Table A.\\n\\nBoth models demonstrated significant improvements in textual aesthetics and general response capabilities after training with TAPO. These results are consistent with our findings for the LLaMA-3.1 models, providing strong evidence for the generalizability of TEXAES and TAPO across diverse LLM architectures.\\n\\nIn the latest version of the paper, we will include the experimental results and analyses for Qwen2-7B and Mistral-7B, further validating the robustness and applicability of our proposed methods.\\n\\n**Table A. Performance Evaluation of Qwen2-7B and Mistral-7B Models After Training with TEXAES and TAPO**\\n\\n| Model | TA Text WR (%) | TA Image WR (%) | AlpacaEval LC WR (%) | Arena-Hard WR (%) | MT-Bench Avg. Score | MMLU 5-shot (%) |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| **Qwen2-7B-Instruct** | 24.63 | 39.40 | 33.43 | 27.69 | 7.48 | 70.46 |\\n| **Qwen2-7B-Instruct + DPO ($y_t$, $y_l$)** | 33.84 | 61.23 | 40.16 | 25.30 | 7.19 | 70.34 |\\n| **Qwen2-7B-Instruct + TAPO** | **37.99** | **64.28** | **40.27** | **32.40** | **7.48** | **70.49** |\\n| **Mistral-7B-Instruct-v0.3** | 8.26 | 28.90 | 29.87 | 17.13 | 6.59 | 61.52 |\\n| **Mistral-7B-Instruct-v0.3 + DPO ($y_t$, $y_l$)** | 25.59 | 54.64 | 36.78 | 20.83 | 6.56 | 61.36 |\\n| **Mistral-7B-Instruct-v0.3 + TAPO** | **28.55** | **57.84** | **38.53** | **23.10** | **6.80** | **61.55** |\\n\\n----\\nWe want to express our sincere gratitude for your review. We apologize for the delayed response, as we have been dedicating an extended amount of time to conducting experiments, which has kept you waiting. 
Please let us know if any of your points were not addressed properly, or if you have any additional questions.\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer sukG,\\n\\nWe sincerely appreciate your constructive feedback on our manuscript. Guided by your insightful suggestions, we have included experiments in Section 6.1 and Appendix H of the revised manuscript to evaluate the generalizability of TEXAES and TAPO on other LLMs.\\n\\nThank you once again for your time and valuable guidance. If you have any further questions or concerns, please feel free to contact us at any time.\\n\\nSincerely,\\n\\nAll Authors\"}", "{\"title\": \"Response to Reviewer uWJT\", \"comment\": \"Thank you for your feedback! We appreciate your suggestion, and we have added the details of human annotation in Section 6.5 and Appendix G. Regarding your concerns, we address your feedback point by point below.\\n>**C1:** Use of GPT-4 for Aesthetic Scoring\\n\\n As outlined in our previous response (**A1**), we selected GPT-4 for its demonstrated consistency, scalability, and cost-effectiveness, making it a practical choice for large-scale aesthetic evaluations. The consistency between GPT-4o and human evaluators, as detailed in the Annotation Consistency analysis (68.67% agreement on textual aesthetics preferences), supports GPT-4's ability to effectively mirror human aesthetic preferences. This level of agreement is consistent with similar studies, such as MT-Bench[1] (66%) and UltraFeedback[2] (59.7%). 
These results validate the reliability of using GPT-4o for evaluating textual aesthetics.\\n\\n>**C2:** Consistency of Human Evaluators\\n\\nAs described in Section 3.3, textual aesthetics are evaluated across four fundamental dimensions: clarity (ease of comprehension), layout (visual organization), uniformity (consistent formatting), and coherence (logical structure). These dimensions are assessed together, resulting in a comprehensive Likert-scale evaluation (1\\u20135) during pairwise comparisons against a baseline model (GPT-4-0314). For robustness, the Bradley-Terry model is utilized to aggregate these pairwise comparisons into a unified ranking, ensuring consistency and quantitative evaluation of textual aesthetics.\\n\\nTo ensure consistency among human evaluators, we implemented a rigorous preparation process. Before starting the annotation task, evaluators underwent a learning phase to familiarize themselves with the definitions and evaluation criteria for textual aesthetics. They were instructed to comprehensively analyze and compare text samples across the four dimensions (clarity, layout, uniformity, and coherence) to assign scores (win/tie/loss) for textual aesthetics between models. Each annotator was tested on a subset of the dataset to confirm their understanding of these definitions prior to commencing the full annotation task.\\n\\nAdditionally, consistency among evaluators was verified through an inter-annotator agreement study (reported in Section 6.5, Table 5), which showed an average agreement rate of 78.84% among annotators, comparable to rates observed in previous human evaluation studies (MT-Bench[1], UltraFeedback[2]), confirming that evaluators consistently applied the evaluation criteria. This process ensures reliable and consistent textual aesthetics assessment.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zheng, Lianmin, et al. 
\\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"Ultrafeedback: Boosting language models with scaled ai feedback\\\" ICML 2024.\\n\\n---\\nWe sincerely hope that the above response addresses your concerns. If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you.\\n\\nSincerely,\\n\\nAll Authors\"}", "{\"title\": \"Rebuttal by Authors (4/4)\", \"comment\": \"> **Q5: The paper uses GPT-4o for automated assessment, but as a machine, GPT-4o processes only code/symbols/characteristics, and its suitability for evaluating textual aesthetics is questionable.**\\n \\n**A5**: Thank you for your thoughtful feedback and for raising the concern about the suitability of GPT-4o for evaluating textual aesthetics. Below, we address this question by discussing GPT-4o\\u2019s capabilities in understanding textual aesthetics and its consistency with human judgments.\\n\\n- **GPT-4o's Textual Aesthetic Evaluation Capabilities**\\n \\n GPT-4o demonstrates strong text comprehension abilities, allowing it to evaluate textual aesthetics based on the same four dimensions\\u2014clarity, layout, uniformity, and coherence\\u2014that human evaluators consider. These dimensions are central to assessing textual readability and comprehensibility, as described in our work on Text-Based Text Aesthetic Scoring. By processing text holistically, GPT-4o aligns well with the definitions of textual aesthetics used in our framework.\\n Moreover, **GPT-4o's exceptional multimodal capabilities** enable it to evaluate text aesthetics from a visual perspective as well. This ability allows GPT-4o to bridge both textual and visual aspects of text aesthetics. 
To further illustrate its multimodal evaluation capabilities, we have included a detailed evaluation case in Appendix F.1.\\n- **Consistency with Human Judgments**\\n \\n To validate GPT-4o's suitability for evaluating textual aesthetics, we benchmarked its assessments against human judgments. As shown in Figure 2 and Table 5 of the main text, the textual aesthetics win rate rankings of our LLaMA-3.1-8B-TAPO and LLaMA-3.1-70B-TAPO models relative to other open-source models are consistent with the win rate rankings obtained using GPT-4o. In our Annotation Consistency analysis, as detailed in Section 6.5, the TA Text scores demonstrated a 68.67% agreement rate with human annotators and the TA Image scores exhibited a 64.83% agreement rate, indicating that GPT-4o judges have a high concurrence rate with human judges. This agreement rate is comparable to those observed in previous human evaluations, which reported an average of 66% agreement in MT-Bench[1] and 59.7% in UltraFeedback[2]. This alignment demonstrates that GPT-4o reliably captures the defined dimensions of textual aesthetics and provides scores that closely align with human preferences.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"Ultrafeedback: Boosting language models with scaled AI feedback.\\\" ICML 2024.\\n\\n---\\nWe want to express our sincere gratitude for your review. If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you. :)\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We are grateful to the Reviewer for the extensive review.
We address your questions point by point below.\\n\\n> **Q1:** Why did the authors use ChatGPT for aesthetics scores instead of human evaluators?\\n\\n**A1:** We adopted GPT-4o for aesthetics scoring based on both practical and methodological considerations. While human evaluation remains the gold standard for assessing textual aesthetics preferences, it is prohibitively expensive and time-consuming to consistently employ human judges. We chose GPT-4o for aesthetics scoring due to its demonstrated consistency and cost-effectiveness in alignment tasks, as seen in related research (e.g., AlpacaEval[1], MT-Bench[2], and Arena-Hard[3]). As shown in Figure 2 of the main text, the textual aesthetics win rate rankings of our LLaMA-3.1-8B-TAPO and LLaMA-3.1-70B-TAPO models relative to other open-source models are consistent with the win rate rankings obtained using GPT-4o. In our Annotation Consistency analysis, as detailed in Section 6.5, the TA Text scores demonstrated a 68.67% agreement rate with human annotators and the TA Image scores exhibited a 64.83% agreement rate, indicating that GPT-4o judges have a high concurrence rate with human judges. This agreement rate is comparable to those observed in previous human evaluations, which reported an average of 66% agreement in MT-Bench[2] and 59.7% in UltraFeedback[4]. These findings suggest that our GPT-4o judges can serve as effective proxies for human preferences in assessing text aesthetics.\\n\\n------\\n\\n**Reference**\\n\\n[1] Dubois, Yann, et al. \\\"Length-controlled alpacaeval: A simple way to debias automatic evaluators.\\\" COLM 2024.\\n\\n[2] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" NeurIPS 2023.\\n\\n[3] Li, Tianle, et al. \\\"From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline.\\\" arXiv 2024.\\n\\n[4] Cui, Ganqu, et al.
\\\"Ultrafeedback: Boosting language models with scaled ai feedback\\\"\\u00a0ICML 2024.\\n \\n> **Q2:** Why is there no information about the evaluators? Is it fair for the data collection?\\n \\n**A2:** Thank you for your suggestion. We recognize the importance of providing detailed information about the evaluators and their process and we will include human evaluators information in the latest version.\\n\\nIn our study, we employed three annotators: two graduate students in computer science and one professor with a background in applied linguistics. All three evaluators are non-native English speakers but are proficient in English. Their diverse academic and linguistic backgrounds provide a balanced perspective for assessing textual aesthetics across the four key dimensions\\u2014clarity, layout, uniformity, and coherence.\\n\\nBefore beginning the annotation process, the evaluators underwent comprehensive training that included case studies and task-specific examples. This training ensured a consistent understanding of the textual aesthetics evaluation criteria and reduced individual interpretation biases. After the training phase, the evaluators were tested on a subset of the dataset to confirm alignment with the evaluation framework and readiness to perform the task.\\n\\nTo ensure fairness and minimize potential biases, the following measures were implemented:\\n\\n- **Standardized Presentation**: Texts were rendered into a uniform visual format, with all information about the originating model removed. This step ensured that annotations focused purely on textual content and aesthetics without being influenced by the source of the text.\\n- **Independent Judgments**: Each annotator performed their evaluations independently, with no communication among annotators during the process. 
This safeguarded against mutual influence or group bias.\\n- **Balanced Sample Distribution**: The dataset provided to annotators included balanced representations of all models under evaluation, preventing skewed exposure to any specific model.\\n \\n> **Q3**: The number of evaluators is too small\\n \\n**A3**: In line with previous related works such as ImageReward[1] and UltraFeedback[2], where three evaluators were commonly employed for human studies, we believe that using three annotators provides a sufficient and robust basis to support our conclusions. The use of three evaluators is a standard practice in similar studies, ensuring a balance between reliable annotation quality and practical feasibility.\\n\\n---\\n\\n**Reference**\\n\\n[1] Xu, Jiazheng, et al. \\\"ImageReward: Learning and evaluating human preferences for text-to-image generation.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"Ultrafeedback: Boosting language models with scaled AI feedback.\\\" ICML 2024.
Musical notation without syllable division is akin to sentences without punctuation marks\\u2014it is a basic formatting mistake, not a matter of aesthetics.\"}", "{\"title\": \"General Response by Authors\", \"comment\": \"We sincerely thank the reviewers for their thorough feedback and insightful comments.\\n\\nWe have carefully addressed each point raised and incorporated all constructive suggestions into the revised manuscript, with modifications highlighted in blue text.\\n\\nShould any aspects require further clarification, we welcome additional questions.\"}", "{\"title\": \"Rebuttal by Authors (2/4)\", \"comment\": \"> **Q2** The construction of the TEXAES dataset may have limitations. The dataset is built based on a filtered version of UltraFeedback, and there could be potential biases introduced during this process. For example, the responses in UltraFeedback might already have a certain style or pattern that could limit the diversity of the aesthetic preferences captured in TEXAES. How were the responses selected and what criteria were used to ensure a diverse range of aesthetic preferences?\\n \\n**A2:** Thank you for raising this important question and for pointing out the potential limitations in our dataset construction. Below, we address the concerns regarding Response Selection Criteria and Ensuring Diversity of Aesthetic Preferences:\\n\\n**1. Response Selection Criteria**\", \"responses_in_ultrafeedback_were_selected_as_follows\": [\"**Chosen Responses**: For each prompt, the \\\"chosen\\\" responses were selected as the highest-rated completions based on their overall scores, which consider factors like instruction-following, coherence, and helpfulness.\", \"**Rejected Responses**: A randomly chosen response from the remaining three completions served as the \\\"rejected\\\" response, ensuring sufficient variability in the dataset.\", \"**2. 
Ensuring Diversity of Aesthetic Preferences**\", \"**Diverse Input Prompts in UltraFeedback**: UltraFeedback contains prompts covering a wide variety of domains, topics, and levels of complexity. This inherent diversity provides a strong foundation for constructing the TEXAES dataset, ensuring coverage across different contexts and aesthetic nuances.\", \"**Limitations Due to UltraFeedback**: While UltraFeedback offers diverse input prompts, its stylistic tendencies may influence the aesthetic preferences captured in TEXAES. For example, the dataset's original responses may carry specific patterns or stylistic biases, potentially limiting the overall diversity in textual aesthetics. We acknowledge that these constraints stem from our reliance on UltraFeedback as the base dataset.\", \"**Generalizability of the Aesthetic Text Polishing Pipeline**: Despite these limitations, the aesthetic text polishing pipeline we developed is highly generalizable and not limited to UltraFeedback. The pipeline can be applied to other instruction datasets, allowing for stylistic and structural variations that expand the aesthetic range of the resulting dataset. 
This flexibility ensures that TEXAES and similar datasets can adapt to diverse input data sources while maintaining high standards of quality and consistency.\", \"**Future Improvements and General Applicability**: To address the limitations and advance TEXAES, our further research includes the following directions:\", \"Applying the polishing pipeline to additional datasets to overcome stylistic biases inherent in UltraFeedback.\", \"Refining the pipeline to better adapt to datasets with varying stylistic and content characteristics, further increasing the range of aesthetic preferences captured.\", \"Exploring novel methods to enhance diversity and quality in aesthetic dataset construction, ensuring broader coverage and general applicability for future research.\", \"By combining UltraFeedback's strengths with our generalizable pipeline, TEXAES represents a significant step forward in advancing research on textual aesthetics while maintaining the flexibility to address potential limitations in future iterations.\"]}", "{\"summary\": \"The paper pioneers the exploration of textual aesthetics in Large Language Models (LLMs). The authors propose an aesthetic refinement process and create a dataset named TEXAES. Based on this dataset, they introduce a textual aesthetics optimization fine-tuning method called TAPO, which aims to enhance textual aesthetics without compromising content accuracy. Additionally, the paper develops two textual aesthetics assessment methods, one based on text analysis and the other on image analysis, to evaluate the aesthetics of LLM outputs. Experimental results indicate that using the TEXAES dataset and TAPO fine-tuning method not only improves aesthetic scores but also enhances model performance on general evaluation datasets such as AlpacaEval and Arena-Hard.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.
The paper is the first to investigate textual aesthetics in LLMs, introducing the TEXAES dataset and TAPO fine-tuning method, providing a new direction for the aesthetic optimization of LLMs.\\n2. The paper empirically validates the effectiveness of the TEXAES dataset and TAPO method, demonstrating not only improved aesthetic scores but also enhanced model performance.\\n3. The paper develops two assessment methods based on text and image analysis, offering tools for a comprehensive evaluation of the aesthetics of LLM outputs.\", \"weaknesses\": \"1. The paper does not elaborate on the quantitative standards for textual aesthetics, that is, what constitutes good textual aesthetics, and how the consistency among human evaluators in textual aesthetics assessment is ensured.\\n2. From the examples shown in Figure 4, the paper's concept of textual aesthetics seems to involve only line breaks, bold fonts, and highlighting key points, which are relatively simple and may lack long-term research value.\\n3. The paper does not clearly explain how to determine if texts with better textual aesthetics are better understood, and how to ensure that human evaluators are making judgments based on content comprehension rather than just neat formatting.\\n4. The paper uses GPT-4o for automated assessment, but as a machine, GPT-4o processes only code/symbols/characteristics, and its suitability for evaluating textual aesthetics is questionable.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer NtqG,\\n\\nWe sincerely appreciate your constructive feedback on our manuscript. 
Guided by your insightful suggestions, we have added the details of human annotation in Section 6.5 and Appendix G, as well as a detailed case explanation in Section 7 to demonstrate how texts with better textual aesthetics are better understood. These updates have been included in the revised manuscript.\\n\\nThank you once again for your time and valuable guidance. If you have any further questions or concerns, please feel free to contact us at any time.\\n\\nSincerely,\\n\\nAll Authors\"}", "{\"summary\": \"This paper focuses on textual aesthetics in large language models (LLMs). It first highlights the importance of textual aesthetics, which has been less explored compared to image aesthetics despite the widespread use of LLMs.\", \"contributions\": \"1. Dataset Construction:\\n\\u2022 Developed an aesthetic data generation pipeline leveraging GPT-4o for aesthetic polishing.\\n\\u2022 Constructed the first aesthetic dataset in the LLM domain, TEXAES, with 50,390 prompts.\\n2. Fine-Tuning Method:\\n\\u2022 Proposed a textual aesthetics-powered fine-tuning method, TAPO, based on direct preference optimization. It uses the Plackett-Luce model with adjustable optimization weights to better leverage TEXAES and enhance aesthetic fine-tuning performance while preserving general performance.\\n3. Evaluation Methods:\\n\\u2022 Developed two evaluation pipelines for textual aesthetics: one based on text and the other based on images.\\n\\u2022 Validated the effectiveness of TEXAES and TAPO through extensive experiments. Fine-tuned LLaMA series models using TEXAES and TAPO and compared their aesthetic scores with state-of-the-art LLMs. Also, employed human experts for professional evaluation.
Results showed improvements in aesthetic scores and general capabilities on some benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality\\n\\u2022 The paper shows originality in addressing textual aesthetics in LLMs, an area that has received less attention compared to image aesthetics. The construction of the TEXAES dataset and the proposed TAPO fine-tuning method are novel contributions.\\nQuality\\n\\u2022 The research methodology appears to be of good quality. The construction of the dataset through an aesthetic polishing pipeline and the use of appropriate evaluation methods (text-based and image-based) demonstrate a systematic approach.\\nClarity\\n\\u2022 The paper is generally clear in its presentation. The introduction effectively sets the context and the importance of textual aesthetics. The methods section explains the dataset construction, fine-tuning method, and evaluation pipelines in an understandable manner.\\nSignificance\\n\\u2022 The work is significant as it fills a gap in the study of LLMs by focusing on textual aesthetics. The proposed techniques have the potential to improve the aesthetic quality of LLM outputs, which can enhance user experience and the usability of these models in various applications.\", \"weaknesses\": \"Dataset Limitations\\n\\u2022 While the construction of the TEXAES dataset is a significant step, it may have limitations. The dataset is built based on a filtered version of UltraFeedback, and there could be potential biases introduced during this process. For example, the responses in UltraFeedback might already have a certain style or pattern that could limit the diversity of the aesthetic preferences captured in TEXAES.\\nEvaluation Complexity\\n\\u2022 The evaluation methods, although comprehensive with text-based and image-based scoring, could be further refined.
The use of GPT-4o as a judge in both text and image evaluations might introduce some subjectivity and reliance on a single model. There could be a need for more diverse evaluation metrics or a more objective way to combine the text and image evaluations to get a more accurate assessment of textual aesthetics.\\nGeneralizability\\n\\u2022 The experiments are mainly focused on the LLaMA series models. It is not clear how well the proposed methods (TEXAES and TAPO) would generalize to other LLMs. There is a need for more extensive experiments across different types of LLMs to demonstrate the broader applicability of the techniques.\", \"questions\": \"1. Dataset Construction Details:\\n\\u2022 Could the authors please provide more details about the filtering process used to create TEXAES from UltraFeedback? How were the responses selected and what criteria were used to ensure a diverse range of aesthetic preferences?\\n2. Evaluation Metric Objectivity:\\n\\u2022 Given that GPT-4o is used as a judge in both text and image evaluations, how can the authors ensure the objectivity of the evaluation metrics? Are there any plans to explore alternative evaluation methods or to combine multiple evaluation models to reduce subjectivity?\\n3. Generalizability to Other LLMs:\\n\\u2022 The experiments focused on LLaMA series models. What are the authors' expectations or initial findings regarding the generalizability of the proposed methods (TEXAES and TAPO) to other LLMs? Have any preliminary tests been conducted on different models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. The authors emphasize that GPT-4 performs comparably to human judgment, aiming to validate its reliability in evaluating textual aesthetics, which I acknowledge and agree with.\\n\\n2.
However, I am concerned that the assessment of textual aesthetics should not rely solely on subjective judgment. Rigorous objective evaluation is essential. Without such standards, it remains unclear whether textual aesthetics genuinely enhance comprehension or merely improve visual appeal. \\n\\n3. If the focus is solely on visual appeal, subjective evaluation might suffice, but this limits the broader significance of textual aesthetics research. For meaningful progress, textual aesthetics should aim to improve human understanding, and demonstrating this requires objective evaluation. \\n\\nTherefore, I would like to keep the current score.\"}", "{\"title\": \"Rebuttal by Authors (1/4)\", \"comment\": \"Thank you for your detailed, helpful feedback. We address your feedback point by point below.\\n\\n> **Q1:** Dataset Construction Details: Could the authors please provide more details about the filtering process used to create TEXAES from UltraFeedback?\\n \\n**A1:** Thank you for your thoughtful question regarding the filtering process used to create TEXAES from UltraFeedback. Below, we provide more details about the filtering process used to create TEXAES from UltraFeedback.\\n\\n**1. Dataset Construction and Initial Filtering**\\n\\nThe foundation of TEXAES is the **UltraFeedback Binarized** dataset, which is a filtered version of UltraFeedback as described in Zephyr[1]. This dataset was constructed using the following process:\\n\\n- Each prompt in the UltraFeedback dataset includes four model completions from a range of open-source and proprietary models.\\n- GPT-4 evaluated each completion based on criteria such as helpfulness and honesty to assign an overall score.\\n- The highest-rated completion (based on the overall score) was selected as the **\\\"chosen\\\"** response.\\n- One of the remaining three responses was randomly selected as the **\\\"rejected\\\"** response.\\n\\n**2. 
Additional Filtering for TEXAES**\\n\\nBuilding upon the UltraFeedback Binarized dataset, two additional filtering steps were applied to refine the dataset for TEXAES:\\n\\n#### **Binary Classification Filtering**\\n\\nAs part of our aesthetic polishing process (described in Sections 3.2 and 5.1), GPT-4o performed a binary classification to determine whether a response required modification to improve readability and comprehensibility.\\n\\n- Responses identified as already aesthetically satisfactory (a total of 5,858 entries) were filtered out and excluded from further processing.\\n- This step ensured that the dataset retained only responses needing enhancement, streamlining subsequent polishing efforts.\\n\\n#### **Length Control Filtering**\\n\\nWe analyzed the lengths of the filtered responses to identify excessive verbosity or unnatural text characteristics. To address this:\\n\\n- Outliers in the length distribution (before and after aesthetic polishing) were excluded.\\n- Only responses within the 90% confidence interval were retained, as detailed in Appendix A.\\n\\nTo validate the impact of length filtering, we conducted an ablation experiment:\\n\\n- A model (LLaMA-3.1-8B-Base) was trained on datasets with and without length filtering using DPO($y_t$, $y_l$).\\n- Results, detailed in Table 9 of Appendix D, demonstrated that length-filtered data significantly improved model performance across all evaluation tasks. Furthermore, length-filtered responses were shorter, more concise, and easier to read, confirming the effectiveness of this filtering step.\\n\\n---\\n\\n**Reference**\\n\\n[1]Tunstall, Lewis, et al. \\\"Zephyr: Direct distillation of lm alignment.\\\"\\u00a0COLM 2024.\"}", "{\"comment\": \"The authors have attempted to address the raised questions. However, I believe the paper still has significant issues regarding the use of GPT-4o for aesthetic scoring, as well as concerns about the consistency of the human evaluators. 
These issues were also highlighted by another reviewer. Therefore, I would like to keep my original score.\"}", "{\"title\": \"Rebuttal by Authors (3/4)\", \"comment\": \"> **Q3:** Evaluation Metric Objectivity: Given that GPT-4o is used as a judge in both text and image evaluations, how can the authors ensure the objectivity of the evaluation metrics?\\n \\n**A3:** Thank you for your thoughtful question regarding the objectivity of the evaluation metrics when using GPT-4o as a judge for both text and image evaluations. Ensuring the fairness and consistency of the evaluation framework is a critical focus of our study. Below, we outline the measures we implemented to ensure objectivity:\\n\\n1. **Standardized Baseline for Pairwise Comparisons**\\n \\n We employed GPT-4-0314 as a standard baseline for all pairwise comparisons. This consistent reference model eliminated variability in the evaluation process and enabled fair comparisons across models by providing a uniform scoring standard for all win rate and aesthetic evaluations.\\n \\n2. **Two-Game Setup to Mitigate Positional Bias**\\n \\n To address potential positional bias in pairwise evaluations, we implemented a two-game setup:\\n \\n - The positions of the compared models were swapped in separate evaluations.\\n - Results from both comparisons were aggregated using the Bradley-Terry model, a statistically robust ranking method that minimizes positional and systemic biases.\\n3. **Evaluation Across Detailed Aspects**\\n\\n GPT-4o\\u2019s evaluations for both text-based and image-based methods were conducted using the same detailed aspects of textual aesthetics. Specifically, each text was analyzed based on the four fundamental components: readability, visual organization, consistency, and overall structure. This standardized breakdown ensures that all evaluations are conducted uniformly and fairly across all datasets.
A detailed example of GPT-4o\\u2019s evaluation process for text can be found in Appendix F.1.\\n4. **Consistency with Human Preferences**\\n \\n Human evaluation is the gold standard for assessing human textual aesthetics preferences. In our experiment, as detailed in Section 6.5, the TA Text scores demonstrated a 68.67% agreement rate with the human annotators and the TA Image scores exhibited a 64.83% agreement rate, indicating that GPT-4o judges have a high concurrence rate with human judges. This agreement rate is comparable to those observed in previous human evaluations, which reported an average of 66% agreement in MT-Bench[1] and 59.7% in UltraFeedback[2]. These findings suggest that our GPT-4o judges can serve as objective proxies for human preferences in assessing text aesthetics.\\n \\n\\nBy employing standardized baselines, mitigating positional bias, evaluating detailed aspects uniformly, and aligning with human preferences, we ensured that GPT-4o serves as a reliable and objective evaluator for textual aesthetics.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"Ultrafeedback: Boosting language models with scaled AI feedback.\\\" ICML 2024.\\n \\n> **Q4:** Are there any plans to explore alternative evaluation methods or to combine multiple evaluation models to reduce subjectivity?\\n \\n**A4:** Thank you for your thoughtful question. We recognize the importance of reducing subjectivity in evaluation metrics and ensuring a comprehensive assessment of textual aesthetics. Below, we outline our ongoing plans to explore alternative methods and combine evaluation models:\\n\\n- **Development of Hybrid Evaluation Metrics**\\n \\n We are working on integrating text-based and image-based textual aesthetic scores into hybrid evaluation metrics.
This combined approach will evaluate semantic clarity alongside visual organization, providing a more comprehensive and objective assessment of text aesthetics. By leveraging the complementary strengths of each metric, the hybrid system will offer deeper insights into the overall aesthetic quality of texts.\\n \\n- **Training a Multimodal Reward Model**\\n \\n To align evaluations more closely with human preferences, we plan to develop a multimodal reward model that combines text and image scoring. This reward model will be trained using a diverse set of human annotations, capturing a broader spectrum of aesthetic preferences and reducing reliance on a single evaluation system.\\n \\n- **Robustness Checks**\\n \\n We will introduce robustness checks, such as adversarial testing, to evaluate the consistency and reliability of the metrics. This testing will help identify potential weaknesses and improve the resilience of our evaluation framework.\"}", "{\"title\": \"Response to Reviewer NtqG\", \"comment\": \"Thank you for your feedback! We sincerely thank the reviewer for your understanding and acknowledgment regarding the reliability of GPT-4o\\u2019s performance in evaluating textual aesthetics. Regarding your second and third concerns, We address your feedback point by point below.\\n\\n> **C1:** However, I am concerned that the assessment of textual aesthetics should not rely solely on subjective judgment. Rigorous objective evaluation is essential. Without such standards, it remains unclear whether textual aesthetics genuinely enhance comprehension or merely improve visual appeal.\\n\\nDuring the human evaluation process, annotators are instructed to make comprehensive judgments based on four aspects: readability, layout, uniformity, and coherence. The assessment of whether the content is easier to understand is not merely based on visual appeal but rather on the ease of understanding the corresponding answers to specific questions derived from the text. 
For example, if Text A contains more headings, line breaks, bold fonts, and other visual elements than Text B, but its content is not easier to understand than that of Text B, then Text A would receive a \\\"loss\\\" label compared to Text B. Conversely, if both texts are equivalent in content but one exhibits better visual organization, making the desired information easier to extract, it would receive a \\\"win\\\" label. This approach ensures that annotators genuinely understand the content and that judgments are not solely influenced by visual appeal.\\n\\nOur evaluation methodology aligns with those used in prior human preference annotation tasks such as MT-Bench and Arena-Hard, which rely on similar preference-based annotations. The high internal consistency among human annotators and their strong agreement with GPT-4 evaluations\\u2014comparable to those observed in MT-Bench and UltraFeedback\\u2014validate the reliability of this approach for evaluating textual aesthetics. These results collectively support the adequacy of our method for assessing textual aesthetics and their contribution to comprehension.\\n\\n> **C2:** If the focus is solely on visual appeal, subjective evaluation might suffice, but this limits the broader significance of textual aesthetics research. For meaningful progress, textual aesthetics should aim to improve human understanding, and demonstrating this requires objective evaluation.\\n\\nThank you for raising this important point. As outlined in Rebuttal **A2**, our definition of textual aesthetics explicitly focuses on improving the readability and comprehension of text, rather than solely enhancing its visual appeal. We appreciate your understanding of this core aspect of our research.\\n\\nAs discussed in **C1**, our human evaluation protocol ensures that annotators genuinely assess the content based on its ease of reading and understanding.
This is reflected in their scoring, which aligns with the methodology employed in prior studies, demonstrating results comparable to those of established works. Additionally, we have implemented the following measures to ensure the objectivity and fairness of our evaluation process:\\n\\n- **Standardized Presentation**: All texts were rendered in a uniform visual format, and any information about the originating model was removed to eliminate potential biases.\\n- **Independent Judgments**: Annotators performed their evaluations independently, without any communication among them during the process.\\n\\nThese precautions ensure that the evaluations are both rigorous and unbiased. The combination of our well-defined textual aesthetics framework, a robust evaluation protocol, and consistent results substantiates the claim that our methodology effectively assesses textual aesthetics and their contribution to human understanding.\\n\\nWe recognize that other approaches, such as reading comprehension tasks, could further strengthen the evaluation process, and we are actively exploring more objective methods to integrate into future iterations of this research. However, we believe that the current human evaluation setup provides a reliable foundation for assessing textual aesthetics and their broader implications.\\n\\n---\\nWe sincerely hope that the above response addresses your concerns. If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you.\\n\\nSincerely,\\n\\nAll Authors\"}", "{\"title\": \"Response to Reviewer NtqG\", \"comment\": \"Dear Reviewer NtqG,\\n\\nThank you for your response and reviewing our work. We address your feedback point by point below.\\n\\n> **Q1**: Regarding the clarity (ease of comprehension) of the text, how can we ensure that reviewers genuinely understand the content? 
I believe a reasonable evaluation approach would involve designing reading comprehension questions based on the content. These could include multiple-choice questions, fill-in-the-blank questions, or short-answer questions to assess the annotators' understanding of the core ideas. This method could help quantify both the time required for comprehension and the accuracy of understanding.\\n\\n**A1:** We sincerely thank the reviewer for your thoughtful feedback and for proposing the use of reading comprehension questions as a method to evaluate text understanding. This approach indeed offers a systematic and quantifiable way to measure comprehension accuracy and processing time, and we acknowledge its value in certain research contexts.\\n\\nHowever, the primary focus of our study is on human preferences regarding textual aesthetics, which are based on an overall perception of the aesthetic quality during the reading process. Human evaluators are capable of identifying which text is easier to understand, and in cases where a distinction cannot be made, they assign a tie, reflecting equivalence in readability. This evaluation method aligns with existing approaches such as MT-Bench[1] and UltraFeedback[2], which similarly rely on human judgments to assess attributes like instruction following, helpfulness, informativeness, and truthfulness. Notably, evaluating aspects such as instruction following, helpfulness, informativeness, and truthfulness is not easier than assessing readability.\\n\\nTo ensure the reliability of our evaluations, all annotators participated in standardized training to familiarize themselves with the evaluation criteria and develop a consistent approach to the task. Furthermore, the annotators are proficient in English, with expertise equivalent to that of domain experts, surpassing the capabilities of typical crowd-sourced annotators. 
This level of proficiency and training ensures that reviewers genuinely understand the content and provide informed and accurate judgments regarding textual aesthetics.\\n\\n\\n---\\n\\n**Reference**\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.\\\" NeurIPS 2023.\\n\\n[2] Cui, Ganqu, et al. \\\"UltraFeedback: Boosting language models with scaled AI feedback.\\\" ICML 2024.\\n\\n\\n> **Q2**: As for the example at the bottom of Figure 4 concerning the musical notation, I believe it is inappropriate to explain this issue as a matter of textual aesthetics. The musical notation output by LLaMA-3.1-8B-Instruct constitutes a formatting error. Musical notation without syllable division is akin to sentences without punctuation marks\\u2014it is a basic formatting mistake, not a matter of aesthetics.\\n\\n**A2:** We appreciate the reviewer\\u2019s observation regarding the example of musical notation at the bottom of Figure 4. While it is valid to classify the absence of syllable division in musical notation as a formatting error, we argue that such formatting issues also directly contribute to and overlap with textual aesthetics, particularly in the context of our study.\\n\\nTextual aesthetics, as defined in our work, encompass not only the visual organization but also the coherence and clarity of the text. Proper syllable division in musical notation enhances readability and logical flow, enabling users to more intuitively interpret and perform the music. This aligns with our broader framework, where the structural organization of text\\u2014including line breaks, spacing, and grouping\\u2014serves both functional and aesthetic purposes. 
The presence or absence of these features influences the user\\u2019s experience, blending considerations of formatting and aesthetics.\\n\\nIn this specific case, the well-structured output by LLaMA-3.1-8B-TAPO, which includes appropriate syllable division and logical grouping of notes, improves the overall usability of the musical notation. This makes it not only functionally accurate but also more visually coherent and pleasing\\u2014core aspects of textual aesthetics. Conversely, the fragmented and incoherent output by LLaMA-3.1-8B-Instruct reduces readability and aesthetic appeal, which underscores the importance of aesthetics in functional outputs like musical notation.\\n\\n----\\nWe want to express our sincere gratitude for your review. If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you. :)\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"title\": \"Rebuttal by Authors (2/4)\", \"comment\": \"> **Q2: From the examples shown in Figure 4, the paper's concept of textual aesthetics seems to involve only line breaks, bold fonts, and highlighting key points, which are relatively simple and may lack long-term research value.**\\n \\n**A2:** Thank you for your observation. To address your concerns, we will clarify two key aspects: (1) the comprehensive scope of textual aesthetics, and (2) the long-term research potential of textual aesthetics for LLMs.\\n\\n**1. The Comprehensive Scope of Textual Aesthetics**\\n\\nWe would like to clarify that our concept of textual aesthetics is not limited to elements such as line breaks, bold fonts, and highlighting key points. As described in Section 3.1, our definition encompasses broader and more foundational dimensions, including clarity (ease of comprehension), layout (visual organization), uniformity (consistent formatting), and coherence (logical structure). 
These dimensions aim to improve not only the visual appeal of the text but also its readability and logical flow, which are critical for effective communication and understanding.\\n\\nRegarding the specific example in Figure 4, the use of line breaks, bold fonts, and other elements must be appropriate and meaningful. For instance, in the third case (bottom of Figure 4), the folk-style melody output by LLaMA-3.1-8B-Instruct lacks proper line breaks, resulting in fragmented sequences of notes where four-syllable blocks are split. This diminishes readability, visual organization, and coherence, making it harder for users to interpret and utilize the melody.\\n\\nIn contrast, the output from LLaMA-3.1-8B-TAPO uses appropriate line breaks and a neatly arranged structure. This significantly enhances clarity, layout, and logical structure, making the melody easier to read and effectively perform. Such thoughtful formatting not only aligns with the principles of textual aesthetics but also ensures the usability of the text for musical purposes.\\n\\n**2. Long-Term Research Potential of Textual Aesthetics**\\n\\nTextual aesthetics plays a foundational role in optimizing the usability and engagement of content generated by LLMs. Drawing parallels with the evolution of **image aesthetics**, we note that visual appeal has significantly influenced the adoption and success of models for image generation, with aesthetic fine-tuning contributing to more human-aligned outputs in fields like design, photography, and multimedia content creation. 
Similarly, **textual aesthetics** holds the potential to transform how users interact with text-based outputs from LLMs by ensuring that the content is not only accurate but also structured and presented in a way that aligns with human preferences.\\n\\nTextual aesthetics is a critical area for advancing the quality and utility of LLMs, and its long-term research value spans a range of potential directions, including but not limited to:\\n\\n1. **Enhanced User Engagement**: Aesthetic textual outputs increase engagement by making content more readable, visually appealing, and intuitive.\\n2. **Improved Data Construction Methods**: Refining methods for constructing textual aesthetics datasets is crucial for advancing the field. \\n3. **Innovative Training Techniques**: Beyond data improvements, developing new training strategies that prioritize textual aesthetics\\u2014such as reinforcement learning with aesthetic feedback or fine-tuning on multimodal datasets\\u2014can enhance an LLM's ability to produce aesthetically aligned outputs while maintaining semantic accuracy.\\n4. **Improved Evaluation Frameworks**: Enhancing evaluation frameworks to align more closely with human preferences is critical. Hybrid metrics that integrate semantic clarity, readability, and visual organization can create objective and reliable systems. Robustness checks, such as adversarial testing, will ensure these systems consistently perform across diverse applications.\"}" ] }
EZExZ5d8ES
Dynamic Mixture-of-Experts for Incremental Graph Learning
[ "Lecheng Kong", "Theodore Vasiloudis", "Seongjun Yun", "Han Xie", "Xiang song" ]
Graph incremental learning is a learning paradigm that aims to adapt models trained on previous data to continuously incremented data or tasks over time without the need for retraining on the full dataset. However, regular graph machine learning methods suffer from catastrophic forgetting when applied to incremental learning settings, where previously learned knowledge is overridden by new knowledge. Previous approaches have tried to address this by treating the previously trained model as an inseparable unit and using regularization, experience replay, and parameter isolation to maintain old behaviors while learning new knowledge. These approaches, however, do not account for the fact that not all previously acquired knowledge is equally beneficial for learning new tasks, and maintaining all previous knowledge and the latest knowledge in a single model is ineffective. Some prior patterns can be transferred to help learn new data, while others may deviate from the new data distribution and be detrimental. To address this, we propose a dynamic mixture-of-experts (DyMoE) approach for incremental learning. Specifically, a DyMoE GNN layer adds new expert networks specialized in modeling the incoming data blocks. We design a customized regularization loss that utilizes data sequence information so existing experts can maintain their ability to solve old tasks while helping the new expert learn the new data effectively. As the number of data blocks grows over time, the computational cost of the full mixture-of-experts (MoE) model increases. To address this, we introduce a sparse MoE approach, where only the top-$k$ most relevant experts make predictions, significantly reducing the computation time. Our model achieved 5.47\% relative accuracy increase compared to the best baselines on class incremental learning with minimal computation increase, showing the model's exceptional power.
[ "Graph neural networks; Incremental Learning" ]
Reject
https://openreview.net/pdf?id=EZExZ5d8ES
https://openreview.net/forum?id=EZExZ5d8ES
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sWgyvNS0Xm", "kAy4hA4cs5", "ib8hKWkGOd", "bnzaFzHGiv", "Z6jfGoeJgd", "XERuBgjm0G", "VFvgFee3hr", "SSFqCrs7L2", "RI1FgYHvUV", "OD4rPSEfJA", "KSRYYKrrQl", "I841OySbrg", "HMKrSLhBPf", "HDOsz5qa0N", "FocRG1i5e9", "9Yn5Lt9iIE", "5XzIndre0o", "5PznChhHW1", "5NVzctXEHF", "47hdNBCSXD", "2Q0e9hqshz" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732123570245, 1737523570702, 1732125445446, 1732124656309, 1732232206953, 1732155001074, 1730678327125, 1732124019729, 1730686088545, 1730179820661, 1732125086741, 1732513272129, 1734973144714, 1732124054583, 1732174468682, 1732122490216, 1732495215183, 1732231255080, 1732125711225, 1732122980111, 1730130121756 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_Ejrw" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_VebG" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_8EcB" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_Ejrw" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_Ejrw" ], [ "ICLR.cc/2025/Conference/Submission3340/Area_Chair_1gvw" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_6WFG" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3340/Reviewer_6WFG" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Authors" ], [ "ICLR.cc/2025/Conference/Submission3340/Reviewer_6WFG" ] ], "structured_content_str": [ "{\"title\": \"Author response 1\", \"comment\": \"We sincerely thank you for your detailed comments that help us improve the paper, and we would like to address some of your concerns here.\\n\\n> Theorem 1 is established under the assumption that the data follow a Gaussian mixture distribution, and this assumption should be explicitly stated in the theorem to make it more precise. Can this theorem extend to data following distributions other than the Gaussian mixture distribution?\\n\\nWe agree with you and have modified the theorem to be more precise in our revision. Meanwhile, Theorem 1 consists of two parts: in the first part, we show that DyMoE is at least as powerful as PI for arbitrary distributions. In the second part, we show that DyMoE is more powerful than PI under the Gaussian mixture assumption. These two parts combined show that DyMoE is generally more favorable than PI. In terms of the theorem\\u2019s generalizability, we observe that in our proof, the only Gaussian property it relies on is the Gaussian tail bound. In the proof, we can replace this bound with the defining tail bound of a sub-Gaussian distribution, and the theorem will still hold (the sub-Gaussian class already includes a large family of distributions, such as the symmetric Bernoulli and symmetric triangular distributions).\\n\\n> The details of the data balancing training procedure (line 297-301) is not very clear from the paper. Specifically, how to select the memory set for the new data block?\\n\\nFollowing many previous works [1,2], the memory node set is selected as the top-K nodes whose embeddings are closest to the class embedding. The K for each class is determined by the population. 
Let $d$ be the overall memory budget for a data block $B$; then for one class $c$, $K = \\\\frac{|B_c|}{|B|}\\\\cdot d$.\\n\\nWe perform data balancing training after training on the new data block $B$. We first obtain its memory set $S_B$, and take the union of this memory set with all previous memory sets to get the overall memory set $S$. Finally, we train the model on $S$ for a small number of epochs (e.g., 5 epochs) with the classification loss and the block-guided loss.\\n\\nWe will include a detailed description in the revision.\\n\\n> The number of nodes and edges per data block is not provided in Table 8. Moreover, the paper only introduces how to split the data into different blocks in Appendix C.2. However, it is unclear how to obtain the training, validation and test sets in each block.\\n\\nFor CoraFull and Reddit, we use the split provided by the DGL package; for ArxivIIL, we use a 0.6/0.2/0.2 train/val/test split within each data block; for ArxivCIL, we use the original split provided by the OGB package. For DBLP and Paper100M, we use the split provided by PI-GNN.\\n\\nFor the specific numbers of nodes and edges, because the datasets have up to 49 data blocks, we present them as line plots in the updated pdf.\\n\\n> It is unclear which GNN model was used in the experiments, and whether the same GNN model was applied to the baseline models.\\n\\nWe strictly follow the GNN architecture described in line 323 of the paper, which is a GIN with the proposed DyMoE module. For all other baselines, we also use GIN, if applicable.\\n\\n> Details of the evaluation metrics are missing, including how each metric is calculated.\\n\\nThe evaluation metrics are average accuracy (AA) and average forgetting (AF), which are described in Equation 1 of the preliminary section.\\n\\n> In line 429, the paper states that \\u201cwe can see our method significantly improves over existing baselines for both AA and AF\\u201d. 
However, this is inaccurate since the performance of DyMoE (57.85) is actually worse than the baseline PI-GNN (59.18) on DBLP in terms of AA. AND In line 430, the paper states that \\u201cWe reach an average of 5.47% improvement in AA and 34.64% reduction in AF\\u201d. However, it didn\\u2019t specify which model these improvements are compared to.\\n\\nWe will revise the experimental section to analyze the results more accurately. Here, we are referring to the overall performance. Because no single baseline achieves the best performance on all datasets, the average performance improvement is computed over the best baseline for each dataset, making this a stricter and more challenging evaluation for DyMoE. Specifically, we compare DyMoE with C-GNN on CoraFull, Arxiv, and Reddit for AA, because C-GNN is the best baseline on these datasets, and we compare with PI-GNN on DBLP as PI-GNN is the best baseline on DBLP. We then take the average change as the overall performance improvement. Under this protocol, we achieve a 5.47% average improvement in AA and a 34.64% average reduction in AF, which shows that DyMoE is the strongest method overall.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author response 2\", \"comment\": \"> The effectiveness of the gating mechanism.\\n\\nOur framework is more than the MoE architecture: it contains a dynamically added MoE module with a **gating/router mechanism**, a collected memory dataset containing subsets of previously trained data, and a block-guided loss to ensure old data in the memory set is still routed correctly to their corresponding experts. Specifically, we keep a small memory set of previous data blocks; when new data arrives, adding new knowledge and changing old patterns, the memory node set is used as a proxy of the entire old data block. 
The detailed information of the old data block is stored within each frozen expert, ensuring the knowledge is not lost, whereas we adapt the gating process through training with the memory node set to account for the overall distribution shift. \\nIn terms of unlearnability, could you be a bit more specific about the conclusion? In [2], the authors developed a catastrophic forgetting upper bound consisting of the training loss, an intra-block structural shift loss, and an inter-block structural shift loss for a **fixed** GNN. This bound only correlates with the performance of a fixed GNN under structural shift and does not suggest unlearnable results. To the best of our understanding, their proposed method shows that by properly training the model, they can minimize such a loss to improve continual learning performance. Intuitively, consider two data blocks whose structures follow completely different distributions and are isolated graph components, which results in a high inter-block structural shift. In DyMoE, the first expert is trained on one block, and its gating vector is set as the mean of the inputs. The input to the MoE module will be very different for data in the second block because of the structural shift, and the second expert learns to capture that with a gating vector very different from the first expert\\u2019s. Thus, during inference, an input that is closer to the first gating vector will have a high gating value for the first expert and be routed correctly.\\n\\n> First, it is not clear what $\\\\mathcal{L}(Dy)$ and $\\\\mathcal{L}(PI)$ are exactly. Assuming the loss function is the one given by (10), the loss function would have a dependency on the $\\\\beta$ which is not reflected in the theorem statement.\\n\\nFor the theoretical analysis, we follow a clean setting, where the loss is just the cross-entropy, which directly reflects the prediction accuracy. 
The block-guided loss $\\mathcal{L}_{BL}$ is omitted, because it is an auxiliary loss for regularization not directly related to the prediction accuracy, and it would be pointless to include this loss when we compare PI with our method.\\n\\n> There are many cases, where the theoretical result does not add any value. For example...\\n\\nThe proof has two parts. The first part is easy to see: DyMoE is always at least as good as PI. To find such a parametrization, the model can simply set all gating vectors to the same one. The second part shows that under certain conditions (Gaussian), DyMoE can achieve a better loss, which shows that DyMoE is a stronger model. Note that in training, the block-guided loss will enforce inputs to be close to their gating vectors, which encourages the gating vectors to stay at the mean of the inputs.\\n\\nIn terms of separable data, both PI and DyMoE can indeed achieve zero loss in a fully supervised setting. However, in the continual learning setting, when the model is presented with only the second data block, the incremental model/expert will optimize only for the second data block, leading to increased loss for data from the first data block, because the incremental model will only output large logits for classes in the second data block. The proof in the paper shows that for any loss achieved by the PI model, DyMoE can achieve a lower one. To achieve such a lower loss, we only need the gating vectors to be close to the distribution means of their corresponding data blocks, which is achieved by the block-guided loss. We will add more clarity to both the theorem statement and the intuition.\"}", "{\"comment\": \"Your comments are invaluable to us, and we will use them as an important source for improving our work. 
Here, we would like to discuss some of your concerns.\\n\\n> The use of symbols is inconsistent\\n\\nThe $n$ and $m$ are the input and output dimensions; we\\u2019ve clarified this in the updated pdf.\\n\\nYou are correct that $y$ is $h$; thanks for pointing this out, we\\u2019ve fixed it in the pdf.\\n\\nHere, the triangle and the vertical bar represent data blocks, not nodes. We use colors and shapes to distinguish different data blocks.\\n\\n> More baselines.\\n\\nThese are definitely related works, and we will include them as important reference points in our revision. Meanwhile, we compared DyMoE to RCL-CN and SSRM, which have open-source implementations, in the general response.\\n\\n> In line 255, ''Figure 6 shows that direct training will not result in specialized experts'' seems confusing, can you explain it more clearly?\\n\\nThis section motivates the design of the block-guided loss. In Figure 6, we show the results of two models: the left one is a DyMoE model trained without block-guided loss, and the right one is trained with block-guided loss. In this experiment, after training, we isolate the experts by only activating one expert during inference to obtain their individual performance on the data blocks. Every line in Figure 6 represents an expert\\u2019s isolated performance. We can see that if we train the MoE without block-guided loss, the experts do not specialize to perform well on their corresponding data blocks.\\n\\n> Randomization process.\\n\\nHere we follow the Sparse MoE [1]. Specifically, during training, the gating value is $g = \\mathbf{x}\\cdot v + \\mathrm{StandardNormal}()\\cdot \\mathrm{softplus}(\\mathbf{x}\\cdot v_n)$, where $v$ is the gating vector and $v_n$ is a trainable noise vector determining the level of noise.\\n\\n[1] Shazeer, Noam, et al. 
\\\"Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.\\\" arXiv preprint arXiv:1701.06538 (2017).\"}", "{\"comment\": \"Thank you very much for your prompt reply! To answer your questions,\\n\\n> This creates a circular reasoning issue, as the validation of the method is being used as its motivation.\\n\\nWe apologize if our description was not clear in the original response. A better way to explain the motivation is this: first, we want experts to specialize in their corresponding data blocks, so that as long as we have correct gating vectors, we will have correct results (low forgetting); hence, we propose the dynamic MoE method to add and train a new expert while freezing the rest. However, in our preliminary study we found that this approach does not result in specialized experts as expected, so we further propose the block-guided loss, which eventually yields specialized experts.\\n\\n> How is the last expert selected?\\n\\nThe last expert is treated just like all previous experts: we apply the same random tweak to its gating value. Indeed, this mechanism does not guarantee the selection of the last expert. However, always selecting the last expert is not the desired behavior. For example, if a sample (one from the memory set) needs previous experts, selecting the last expert might not be the most beneficial choice for the performance. The randomization is not designed to always select the last expert; rather, it ensures that the last (and new) expert is selected with a decent likelihood, so that it is properly trained.\\n\\nHope this response addresses your concerns, and we are happy to explain further.\"}", "{\"comment\": \"Thank you for your reply. 
For weaknesses 3 and 4, I still have two questions.\\n\\nRegarding Figure 6, I noticed that in the experimental section, the authors use it to \\\"evaluate whether their model and training procedure result in specialized experts as designed.\\\" However, the author mentioned Figure 6 \\\"motivates the design of block-guided loss\\\". This creates a circular reasoning issue, as the **validation** of the method is being used as its **motivation**.\\n\\nRegarding the randomization process, Sparse MoE states that \\\"the noise term helps with load balancing,\\\" while the authors claim that \\\"all experts have similar selection opportunities.\\\" However, in the subsequent experiments, k=3. I am curious how this random noise ensures the inclusion of the last expert, as it seems that this randomness does not guarantee the last expert will be part of the top-k selection.\"}", "{\"summary\": \"This paper proposes a dynamic mixture-of-expert (DyMoE) model for graph incremental learning. Specifically, DyMoE uses separate expert networks to model different data blocks. When a new data block arrives, it learns a new expert without modifying previously learned experts. In addition to the conventional MoE loss, DyMoE introduces a block-guided regularisation loss to correctly assign experts for different data blocks. To improve the efficiency of the DyMoE model, the paper also proposes a sparse DyMoE model, where instead of fusing the predictions from all the expert networks, it only uses the top-K most relevant experts to make predictions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is easy to follow. The idea of using the MoE model to address graph incremental learning is novel and interesting. Experimental results on six graph incremental learning datasets demonstrate the effectiveness of the proposed DyMoE model and the block-guided regularisation loss. 
The results also indicate the DyMoE model can learn dedicated experts for different data blocks.\", \"weaknesses\": \"1. Theorem 1 is established under the assumption that the data follow a Gaussian mixture distribution, and this assumption should be explicitly stated in the theorem to make it more precise. Can this theorem extend to data following distributions other than the Gaussian mixture distribution?\\n\\n2. The details of the data balancing training procedure (line 297-301) is not very clear from the paper. Specifically, how to select the memory set for the new data block? Does this training procedure use both $L_{cls}$ and $L_{BL}$ loss?\\n\\n3. There are several missing details in the experimental setups that should be clarified: \\n\\n(1) The number of nodes and edges per data block is not provided in Table 8. Moreover, the paper only introduces how to split the data into different blocks in Appendix C.2. However, it is unclear how to obtain the training, validation and test sets in each block. \\n\\n(2) It is unclear which GNN model was used in the experiments, and whether the same GNN model was applied to the baseline models. \\n\\n(3) Details of the evaluation metrics are missing, including how each metric is calculated. \\n\\n4. The analysis of the results is inaccurate and unconvincing in some cases: \\n\\n(1) In line 429, the paper states that \\u201cwe can see our method significantly improves over existing baselines for both AA and AF\\u201d. However, this is inaccurate since the performance of DyMoE (57.85) is actually worse than the baseline PI-GNN (59.18) on DBLP in terms of AA. \\n\\n(2) In line 430, the paper states that \\u201cWe reach an average of 5.47% improvement in AA and 34.64% reduction in AF\\u201d. However, it didn\\u2019t specify which model these improvements are compared to. \\n\\n(3) The results in Table 1 and Table 2 indicate that the proposed DyMoE model performs worse than baselines on the DBLP and Arxiv in terms of AA. 
However, the paper lacks analysis to explain this result. \\n\\n(4) I also have some concerns regarding the efficiency experiments. In table 3, the paper provides the training time of DyMoE and three baselines: Finetune, ER-GNN and Retrain. In table 5, the paper only compares DyMoE and ER-GNN in terms of inference time. It is unclear why the paper only compares these three baselines instead of the more effective baseline C-GNN. Moreover, the results in table 5 show that the inference time of DyMoE is worse than ER-GNN, which indicates that the proposed DyMoE model actually cannot achieve good performance while maintaining good efficiency. \\n\\n(5) In table 4, I don\\u2019t think the comparison between Full and the other variants is fair given that all the other variants are based on the sparse model. Comparing the results of Sparse and w/o DB, w/o BL, w/o Dy, the statement that \\u201cwe see performance drop whenever a component is missing from the model, validating the importance of each component\\u201d is not very accurate as the performance of w/o Dy (92.09) is better than Sparse (91.57) on Reddit in terms of AA and the performance of w/o DB (83.09) is better than Sparse (82.93) on Paper100M in terms of AA. \\n\\n5. The effect of the hyperparameter $K$ in the sparse model is not investigated. \\n\\n6. The code is not available, which makes it difficult to reproduce the results given that a lot of experimental details are missing.\", \"questions\": \"1. Can Theorem 1 extend to data following distributions other than Guassian mixture distribution?\\n2. In the efficiency analysis, why not report the training and inference runtime of more effective baselines such as C-GNN?\\n3. The proposed DyMoE learns a new expert when a new data block arrives. This requires prior knowledge of which data belong to different blocks and a sufficient amount of data in each block to effectively learn a new expert. 
This assumption may present challenges in real-world incremental learning applications, where data blocks are not predefined or large enough for expert learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response 2\", \"comment\": \"> The results in Table 1 and Table 2 indicate that the proposed DyMoE model performs worse than baselines on the DBLP and Arxiv in terms of AA. However, the paper lacks analysis to explain this result.\\n\\nOur method still outperforms other baselines on the Arxiv-Class incremental learning scenario. For the instance incremental scenario, we note that even the most basic baseline (Online GNN) is performing very well, showing that Arxiv Instance Incremental does not carry the significant distribution shift that DyMoE is good at capturing, and DyMoE uses more parameters, which makes the model more prone to overfitting.\\n\\tFor the DBLP dataset, we observe a trade-off between average accuracy and average forget (methods with higher accuracy show worse forgetting). For this dataset, it is difficult to learn new knowledge without compromising some previously acquired knowledge. Hence we observe that DyMoE is good at maintaining low forgetting while sacrificing some accuracy.\\n\\n> Efficiency Concern\\n\\nFor the training time, we pick these baselines for running time comparison because they are the most representative baselines. In particular, C-GNN has the same running time as ER-GNN when their memory set sizes are the same (which is the case in this paper).
Hence, we only provided the running time of the most representative ones.\\n\\nFor the inference time, because finetune, ER-GNN and Retrain have the same architecture, they will have the same running time as well, which is why we provided the results for ER-GNN.\\n\\nIn the appendix, we conducted experiments when we increased the sizes of baseline models and found that their running time increased with minimal or even negative performance improvement. We present results when we make DyMoE\\u2019s parameter sizes the same as other baseline models in the general response.\\n\\nWe can see that, when DyMoE and baselines have a similar number of active parameters, the inference time is similar, and DyMoE can still achieve promising results.\\n\\n> Concerns about ablation study.\\n\\nWe appreciate your detailed look into the paper, and we will carefully revise this section to accurately reflect the experimental results. Meanwhile, we still observe that without data balancing, the overall average accuracy dropped by 1.8%, showing the necessity to inform the gating vectors of the actual distribution of the data blocks. Without block-guided loss, the overall AA dropped by 3.8%, validating that block-guided loss is necessary to train specialized experts. If the model is trained with all experts initialized at the first data block, the performance dropped by 2.2%, as this essentially degrades to a regular MoE model, which does not have a mechanism to prevent forgetting.\\n\\nFor Paper100M, we observe that the number of data samples per block increases, making more recent blocks have a higher weight in the overall accuracy. In this case, accurately representing the full distribution through DB is not as critical. For reddit, we observe that it is much denser than other datasets, requiring more parameters to learn the graph, and with more initial parameters, the w/o Dy model can model the first few data blocks better.
Because reddit only has 8 data blocks, the benefit of DyMoE in choosing correct experts is not apparent.\\n\\n> Effectiveness of $K$\\n\\nWe additionally present results when we increase K. We can see that even with just one active expert, DyMoE is performing well, showing the effectiveness of the specialized experts. We also observe that K has a different impact on different data. Specifically, the benefit of more active experts saturates quite early for the arxiv dataset, while the reddit dataset continuously benefits from more active experts, which is intuitive as the gap between the full DyMoE and sparse DyMoE(K=3) is larger.\\n\\n| | Arxiv | Reddit |\\n|-----|-------|--------|\\n| k=1 | 66.53 | 88.46 |\\n| k=3 | 67.25 | 91.57 |\\n| k=5 | 68.14 | 91.98 |\\n| k=7 | 68.09 | 92.63 |\\n\\n> Code availability\\n\\nWe will release the code after the final decision.\"}", "{\"summary\": \"This paper proposes a Dynamic Mixture-of-Experts (DyMoE) approach for graph incremental learning, utilizing specialized expert networks for incoming data blocks. It introduces a customized regularization loss to help existing experts retain performance on old tasks while supporting new learning. 
Additionally, a sparse MoE model is developed to reduce computational costs by using only the most relevant experts for predictions.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1.\\tThis paper introduces a Dynamic Mixture-of-Expert (DyMoE) module with separate experts for each data block, allowing dynamic relevance-based information synthesis.\\n2.\\tThis paper proposes a block-guided loss function to minimize negative interference among experts, reducing catastrophic forgetting.\\n3.\\tThis paper integrates the DyMoE module into GNN layers to effectively handle data shifts in continual graph learning.\\n4.\\tThis paper develops a sparse DyMoE variant that focuses on the most relevant experts, enhancing efficiency while maintaining accuracy.\", \"weaknesses\": \"1.\\tIn real-world applications, how do the dynamic changes in graph structures and data blocks affect the model's performance? Have you considered the impact of data noise and outliers on the results?\\n2.\\tExperiments:\\n1)\\tThe MoE structure increases the number of parameters in the model, thereby enhancing its capability. In contrast, the parameter count of the baseline model is not specifically mentioned. Does this represent an unfair comparison?\\n2)\\tThis paper mentions \\\"with minimal computation increase,\\\" but it seems insufficient experimental or theoretical evidence is provided. Could there be a comparison of throughput?\\n3)\\tThe paper claims that this method can handle topological and contextual changes, but how is this topological change quantified and assessed? Understanding the extent and nature of these changes is crucial for evaluating the effectiveness of the proposed approach. There are no experiments to address this problem.\", \"questions\": \"1. How do the dynamic changes in graph structures and data blocks affect the model's performance? Have you considered the impact of data noise and outliers on the results?\\n\\n2. 
Enhance experiments and analysis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a Dynamic Mixture-of-Experts (DyMoE) framework to address the challenge of catastrophic forgetting in dynamic graphs for incremental graph learning. The DyMoE framework introduces a novel approach by dynamically adding specialized expert networks for each incoming data block. These expert networks are used selectively through a gating mechanism, which determines the relevance of each expert based on the input data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper identified the issue of existing continual learning methods that ignore the correlation between different data blocks.\\n2. The paper tackled a significant dynamic graph problem in real-world scenarios, where data arrives incrementally, offering a scalable solution without the need for full dataset retraining.\\n3. The paper developed a DyMoE module with specialized experts for each data block and introduced a data block-guided loss to reduce negative interference among the experts.\", \"weaknesses\": \"1. The use of symbols is inconsistent, and the explanations lack clarity. Specifically:\\n\\n (a) What do m and n represent in Eq. (3)? Are they referring to the feature dimension or the number of nodes? \\n\\n (b) In Eq. (13), what is the specific meaning of y? Does it convey the same meaning as h?\\n\\n (c) In Figure 1, triangles are used on the left and circles on the bottom right to represent blocks.\\n\\nThe authors can include a notation table or provide more explicit definitions for each symbol when first introduced.\\n\\n2. The baseline methods used in the experiment mainly come from older works, with only one baseline in recent 3 years. 
State-of-the-art approaches, such as MSCGL [1], RLC-CN [2], SEM [3], and UGCL [4], would have provided a more comprehensive evaluation. Can you justify the choice of baselines if more recent methods were intentionally excluded? \\n\\n [1] J. Cai, X. Wang, C. Guan, Y. Tang, J. Xu, B. Zhong, and W. Zhu, ''Multimodal continual graph learning with neural architecture\\n search,'' in Proceedings of the ACM Web Conference, 2022, pp.1292\\u20131300. \\n\\n [2] A. Rakaraddi, L. Siew Kei, M. Pratama, and M. De Carvalho, ''Reinforced continual learning for graphs,'' in Proceedings of the\\n 31st ACM International Conference on Information & Knowledge Management, 2022, pp. 1666\\u20131674. \\n\\n [3] Zhang, Xikun, Dongjin Song, and Dacheng Tao. ''Ricci curvature-based graph sparsification for continual graph representation \\n learning,'' IEEE Transactions on Neural Networks and Learning Systems (2023).\\n\\n [4] T. D. Hoang, D. V. Tung, D.-H. Nguyen, B.-S. Nguyen, H. H.Nguyen, and H. Le, ''Universal graph continual learning,'' Transactions \\n on Machine Learning Research, 2023.\\n\\n3. In line 255, ''Figure 6 shows that direct training will not result in specialized experts'' seems confusing, can you explain it more clearly?\\n\\n4. In Section 3.3, it\\u2019s unclear how to tweak the gating values during training randomly so all experts have similar selection chances. 
What is the magnitude of the tweaks, or can you give a mathematical formulation of how the randomization is applied?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response 1\", \"comment\": \"Thank you for your very knowledgeable and thoughtful comments; these greatly help us in making our paper better, and we would like to discuss some of your concerns here.\\n\\n> The identified problem and the proposed method are not novel.\\n\\nThe continual learning problem on graphs is unique compared to that in the CV/NLP domain. Specifically, the model not only needs to learn from new data blocks and keep the old knowledge but also needs to adapt the representation learned for old blocks according to the new graph contexts, as the new data block may change the topology of nodes from the old data blocks.\", \"there_are_two_challenges_in_graph_continual_learning\": \"1) Reducing forgetting: The model needs to remember old data blocks, which is a common challenge for other continual learning domains, like CV and NLP. 2) Adapting old nodes to new graph context: unlike CV/NLP, the nodes in old data blocks change as new data arrives; a model should account for this.\\n\\nTo solve the first challenge, we dynamically add MoE modules to learn individual data blocks to reduce forgetting. The trained modules are not modified afterward, preserving as much learned information as possible. Consider an experience-replay variant that finetunes the model with the memory node set from previous data blocks: it can potentially modify all parameters of the model, causing significant forgetting, whereas we only adjust the gating vectors, meaning that we can replicate the output when the gating values are correct.
Lastly, to ensure the gating value correctness, we propose the block-guided loss, which provides a direct supervision signal for producing correct gating values.\\n\\nThe second challenge happens because a node from data block 1 can later be connected to nodes from future data blocks. Our design interleaves the MoE modules into GNN layers, so the models learn to calibrate the input to the next layer via message-passing, without changing the experts' parameters. Intuitively, because of the distribution shift, the previously trained experts no longer work. Training like ER-GNN would significantly change the expert\\u2019s performance while heavily overfitting to the memory nodes, whereas we only update the gating vectors to account for the distribution shift in the representative memory nodes, so the information previously learned about the entire data block is not lost.\\n\\nWhile MoEs in CV/NLP are indeed used for continual learning, they do not tackle the second challenge. For example, to perform class-incremental learning, [1] needs to train an auto-selector which depends on a specific data block, and does not evolve over time.
Consequently, if we adapt this method to the graph domain, an auto-selector trained for data block one will fail quickly when new data blocks arrive, because the auto-selector no longer works for data block one as the new connections change the distribution of the nodes in data block one.\\n\\n> What happens to the parameter complexity (overall model size) when the data stream is large\\n\\nThe overall complexity indeed increases linearly as the length of the data stream increases, which is why we implemented the sparse variant to ensure the activated parameters and inference time remain constant as the length increases.\\n\\n> How effective is this approach for parameter/information sharing among tasks (otherwise, why not train a separate network for each task)\\n\\nAn important target in our paper is a class-incremental setting, where new data blocks introduce new classes and the tasks also require the model to distinguish these new classes from the old ones. A separate model cannot perform this task, and hence an integrated model is essential. Meanwhile, to study the effectiveness of information sharing, we additionally conduct task-incremental learning.\\n\\n| | Cora-full | Arxiv-IIL |\\n|-------------|-----------|-----------|\\n| Separate | 84.61 | 63.19 |\\n| ER-GNN | 83.21 | 66.53 |\\n| C-GNN | 85.94 | 67.28 |\\n| DyMoE (k=3) | 87.82 | 67.76 |\\n\\nWe compare DyMoE to the separate baseline where we train separate models for each task. We can see that without support from a larger dataset (previous data blocks), the separate model struggles to achieve optimal performance.\"}", "{\"comment\": \"Thank you to the authors for their detailed response. However, based on the responses provided by the authors, I remain unconvinced about two critical points:\\n\\n1. Validation & Motivation: The explanation regarding the motivation for block-guided loss still lacks clarity and consistency.
While the authors attempted to clarify their reasoning, it remains unclear how Figure 6 can serve both as a motivation and a validation. This circular reasoning issue has not been fully addressed, and I strongly encourage the authors to explicitly separate these concepts in their manuscript to avoid further confusion.\\n\\n2. Randomization Mechanism: The authors' response regarding the random selection of experts provides some insights but lacks empirical evidence to substantiate their claims. Specifically, while they argue that all experts, including the last one, are selected with sufficient likelihood, they do not provide statistical analysis or experimental results to confirm this. Without concrete evidence, it is difficult to assess whether the mechanism achieves the desired balance or unintentionally neglects certain experts.\\n\\nGiven these unresolved issues, I maintain my original score.\"}", "{\"metareview\": \"This paper looks at the graph incremental learning problem which is basically the continual learning problem where each task involves learning a graph neural network (GNN). 
The paper presents a dynamic mixture of experts (MoE) approach in which a dynamic MoE GNN layer adds new expert networks dedicated to model the incoming data for a new task.\\n\\nThe paper's basic idea of using MoE is interesting; however, it is to be noted that MoE has been used in several prior works in continual learning although not for the setting where each task is a GNN.\\n\\nThe reviewers expressed several concerns, some of which include: (1) issues regarding the motivation and validation, and randomization mechanism used in expert section (Reviewer Ejrw), (2) missing SOTA baselines such as MSCGL, RLC-CN, SEM, and UGCL (Reviewer Ejrw), (3) lack of discussion/insights about the challenges in applying MoE for incremental GNN (Reviewer 6WFG), (4) lack of discussion around model size for large data stream and how the proposed approach compared with alternatives such as those that rely on parameter/information sharing among tasks (Reviewer 6WFG).\\n\\nThe authors during rebuttal/discussion tried to address some of these concerns but the concerns still lingered. In the end, no one championed the paper for acceptance.\\n\\nBased on the reviews, the authors' response, the discussion, and my own reading of the paper, I largely agree with the concerns raised by the reviewers. The authors should take into account the feedback to look for ways to improve the work and consider submitting at another venue.\", \"additional_comments_on_reviewer_discussion\": \"The authors' rebuttal was considered and discussed.\\n\\nReviewer Ejrw expressed concerns regarding the motivation and validation of the method, and about the selection of the last expert. The authors responded to the concern but the reviewer maintained that the reasoning behind the method's motivation and validation appear to be \\\"circular\\\". The reviewer's concern regarding the randomization mechanism in expert selection also remained unresolved. 
Due to these reasons, Reviewer Ejrw maintained the original score.\\n\\nReviewer 6WFG also expressed several concerns, such as differences from prior works in graph continual learning. The authors responded to these in detail, but in the follow-up discussions these concerns lingered.\\n\\nNo other reviewer championed the paper.\\n\\nThe decision was made after factoring in all these points.\"}", "{\"title\": \"Author Response 3\", \"comment\": \"> The proposed DyMoE learns a new expert when a new data block arrives. This requires prior ...\\n\\nCompared to other methods, the block-guided loss, which directly correlates samples to their corresponding experts, is designed to better distribute samples to the most relevant experts during inference. \\n\\nIn the case of very small block sizes, the new data will not shift the entire distribution, and we might consider simply using the existing model for prediction, or combining the new data into the latest block and retraining the last expert. If we fine-tune on the small data block, most existing methods will suffer from overfitting due to the small sample size. On the other hand, updating the model with a small data block is neither economical nor favorable in production. People usually control the intervals between model refreshes to balance the model performance and the cost, and they typically update their models at regular intervals, such as daily or weekly, to accumulate sufficient numbers of samples.\\n\\nHowever, we agree with you that uneven block sizes present unique challenges to continual learning and are a valuable setting to explore, so we present experimental results where the sizes of data blocks in a data stream are uneven.\\n\\n[1] Zhou, Fan, and Chengtai Cao. \\\"Overcoming catastrophic forgetting in graph neural networks with experience replay.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 5. 2021.\\n\\n[2] Kim, Seoyoon, Seongjun Yun, and Jaewoo Kang.
\\\"DyGRAIN: An Incremental Learning Framework for Dynamic Graphs.\\\" IJCAI. 2022.\"}", "{\"comment\": \"Thank you for your diligent responses. Unfortunately, I still found most of my main concerns unresolved. Please allow me to summarize/rephrase them as follows.\", \"main_concerns\": \"1. one of the claimed contributions of the paper \\\"identified the issue of existing continual learning methods that ignore the correlation between different data blocks\\\". Please explain what the new insights presented in the paper and how they are different from the previous works on graph continual learning [1]. \\n\\n2. the integration between studied problem, methods and theoretical results\\n \\na. I completely agree that modifications have been added to the MoE model. I do not see how the proposed design is tailored or motivated by the unique challenges faced by GCL. Can you explain the connection between the proposed design and the dependency/correlation of data in GCL?\\n\\nb. For the theoretical comparison between MoE and PI. Is cross-entropy alone a typical learning objective/setting for the PI frameworks? \\n\\nc. It is known that MoE in general are better at learning the mixture of distribution. So what is the novel insight brought by the theoretical analysis that is tailored to the proposed setting?\", \"additional_comment\": \"1. I do not think the incremental class setting is a valid/complete argument, as one can train a binary classification model for each class.\\n\\n2. the argument for the separable data is also not valid/complete, as there can be cases where the first data block and second block live in orthogonal subspace and the learning process does not interfere. \\n\\n3. the unlearnability results mentioned are a straightforward corollary from the result of [1] and the classic result in transfer learning [2]. \\n\\n[1] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. 
PMLR, 2023.\\n\\n[2] \\\"A survey on domain adaptation theory: learning bounds and theoretical guarantees.\\\" arXiv preprint arXiv:2004.11829 (2020).\"}", "{\"title\": \"General Response\", \"comment\": \"We would like to thank all reviewers for their constructive feedback and for acknowledging some of our contributions. In particular,\\n- The identified problem is important and interesting. (Reviewer Ejrw, 6WFG)\\n- The proposed method \\u201ceffectively handles data shifts\\u201d through block-guided loss and DyMoE model. (Reviewer 8EcB, VebG)\\n- The method is empirically verified with experiments and shows strong performance. The paper also carefully examined the design to show that DyMoE indeed results in specialized experts to reduce forgetting. (Reviewer 8EcB, VebG, Ejrw)\\n\\nThe reviewers also express some concerns about the proposed methods, here we address some of the common concerns and leave the rest to individual responses to the reviewers.\\n\\n> The continual learning problem involves trade-off among inference time, parameter size and performance, how does DyMoE respond to this?\\n\\nHere we present experimental results comparing DyMoE with other baselines varying active parameter sizes.\\n\\n| | Active Params | Reddit | Time | Arxiv-CIL | Time |\\n|------------|---------------|--------|-------|-----------|-------|\\n| ER-GNN | 26M | 81.35 | 18.41 | 57.09 | 11.07 |\\n| C-GNN | 26M | 86.75 | 18.37 | 63.65 | 10.94 |\\n| DyMoE(k=3) | 28M | 90.06 | 17.61 | 66.09 | 11.02 |\\n| ER-GNN | 80M | 80.69 | 19.89 | 59.44 | 14.61 |\\n| C-GNN | 80M | 86.3 | 20.01 | 62.18 | 14.78 |\\n| DyMoE(k=3) | 77M | 91.57 | 20.55 | 67.25 | 13.65 |\\n\\nWe can see that when models have similar parameter sizes, the inference time of DyMoE is comparable to other baselines, and still achieves better performance.\\n\\n> Comparing DyMoE with more recent baselines.\\n\\nWe compared DyMoE with two more recent baselines RLC-CN and SSRM-GIN.\\n\\n| | Paper100 AA | Paper100 AF | 
CoraFull AA | CoraFull AF | Reddit AA | Reddit AF |\\n|----------|-------------|-------------|-------------|-------------|-----------|-----------|\\n| RLC-CN | 76.53 | -4.61 | 72.94 | -9.8 | 82.39 | -5.52 |\\n| SSRM-GIN | 81.6 | -3.8 | 82.49 | -6.37 | 88.45 | -6.14 |\\n| DyMoE | 82.93 | -3.31 | 81.33 | -5.69 | 91.57 | -3.46 |\\n\\nWe can see that DyMoE\\u2019s performance is still strong compared to more recent baselines. While SSRM-GIN outperforms DyMoE on CoraFull AA, we notice that SSRM is a regularization approach, optimizing the model to fight structural shift through an auxiliary loss. We can potentially incorporate this into DyMoE to further improve performance. \\n\\n\\n[1] ''Reinforced continual learning for graphs,'' in Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022, pp. 1666\\u20131674.\\n\\n[2] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. PMLR, 2023.\"}", "{\"comment\": \"I thank the author for the diligent response. However, I do not think I have got direct/sufficient answers to my previous concerns/questions. As such, I will keep the original score.\"}", "{\"comment\": \"Thank you very much for the prompt reply. To further clarify your main concerns,\\n\\n> What's the distinction between DyMoE and SSRM[1].\\n\\nSSRM [1] is indeed an exciting and solid work, and it addresses a closely related problem. The key distinction between SSRM and our work is how the structural shift is captured. In SSRM, it is captured implicitly through a regularization loss that quantifies the distribution difference between the original and updated neighborhood. In DyMoE, the structural shift is explicitly captured by interleaving MoE into the GNNs. Consider the case, shown in Figure 3 of the paper, when a node in previous data blocks is connected to new nodes in new data blocks: the input to the k-th DyMoE GNN layer is the output of the (k-1)-th layer.
If we use the old experts in the (k-1)-th layer to process the new nodes and feed the output to the k-th layer, the model will fail because the old experts were never trained on the new data, and will consequently forget. Instead, we train a new expert in the (k-1)-th layer, whose representation can help predict old nodes under connections to new data, mitigating its negative impact.\\n\\n> Why the design is necessary for GCL problems.\\n\\nUnlike in [2] (vision domain), where one data point belongs to one data block, in GCL, one data point can contain data from multiple data blocks because we consider a node and all of its neighbors. Consequently, in the graph domain, multiple/potentially unrelated experts need to be activated to learn one node. The MoE and interleaving design here explicitly captures this: all neighbor nodes for a target node are processed by their own corresponding experts, and the experts are aligned to the representation that helps the prediction of the target node.\\n\\n> For the theoretical comparison between MoE and PI. Is cross-entropy alone a typical learning objective/setting for the PI frameworks?\\n\\nPI does use other auxiliary losses during training, but the only meaningful loss during evaluation is the cross-entropy loss, and we are comparing that between PI and DyMoE.\\n\\n> ... So what is the novel insight brought by the theoretical analysis that is tailored to the proposed setting?\\n\\nThe proof here is tailored to the continual learning context, which shows concretely that DyMoE, an MoE-based continual learning method, could achieve lower loss than PI. Here we try to explain its implication more intuitively. For two data blocks, PI is first trained to achieve zero loss on the first data block and then trained to achieve zero loss on the second data block without information from the first data block.
This process will introduce loss to the first data block, and we prove that DyMoE can mitigate this loss.\\n\\nWe hope these responses can resolve your concerns. We still greatly appreciate your time evaluating our paper and your very knowledgeable comments to help improve our work.\\n\\n[1] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] \\\"Boosting continual learning of vision-language models via mixture-of-experts adapters.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\"}", "{\"title\": \"Author response 3\", \"comment\": \"> The theorem statement tells us that it is possible for the proposed method to achieve a smaller...\\n\\nAs discussed in response to your previous concern, the loss is just the classification loss (cross-entropy) between the predicted value and the true label, as can be seen in equation (20); we will state this more clearly in the theorem. So here the loss is independent of the model and directly reflects the accuracy (if a model has low cross-entropy loss, it assigns high probability to correct classes, and hence higher accuracy). Optimizing the cross-entropy loss will optimize the performance. The theorem proves that, in arbitrary circumstances, DyMoE can achieve the same classification loss as PI, which is a direct indicator of classification accuracy. And, in the case of a Gaussian mixture, DyMoE can achieve lower classification loss, leading to higher accuracy.\\n\\n> Outdated baselines.\\n\\nWe compare DyMoE with more recent baselines, including RLC-CN[3] and SSRM[2], in the general response. DyMoE is still strong compared to these baselines.\\n\\n> Inconsistent performance.\\n\\nThe lower performance is because we follow a different split than the CGLB[4].
We follow the split in the PI-GNN[5] paper.\\n\\n[1] \\\"Boosting continual learning of vision-language models via mixture-of-experts adapters.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[2] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[3] ''Reinforced continual learning for graphs,'' in Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022, pp. 1666\\u20131674.\\n\\n[4] \\\"Cglb: Benchmark tasks for continual graph learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 13006-13021.\\n\\n[5] \\\"Continual learning on dynamic graphs via parameter isolation.\\\" Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023.\"}", "{\"comment\": \"We greatly appreciate your positive comments and critical feedback on our work; we address your concerns as follows.\\n\\n> In real-world applications, how do the dynamic changes in graph structures and data blocks affect the model's performance? Have you considered the impact of data noise and outliers on the results?\\n\\nThat\\u2019s a great question. It is indeed more challenging when the data are noisy and dynamic. We can compare the performance of the Pretrain baseline (we only train the model using the first data block and run inference on the following data blocks with this model) with other existing baselines, and we can see a large performance gap because the distribution of the data significantly shifted. More interestingly, we observe that when the graph changes more rapidly, such as in arxiv where the volume of publications has significantly increased over the past few decades, the gap is large, and the gap is smaller when the graph is more consistent, such as in Elliptic where the data only span roughly 2 years.
To cope with this challenge, we propose block-guided loss to stabilize training. For example, when an outlier falls in the distribution boundary of two data blocks, it would be difficult for the MoE model alone to determine the most relevant expert. However, in the case of continual learning, every data point is accompanied by a block index, and hence we can use that information to provide direct supervision to reduce the risk of assigning wrong experts to noisy/outlier data points.\\n\\n> The MoE structure increases the number of parameters in the model, thereby enhancing its capability. In contrast, the parameter count of the baseline model is not specifically mentioned. Does this represent an unfair comparison?\\n\\nIt is indeed important to evaluate the models under the same parameter budget, and we present results in the general response showing that the fixed-size models experience performance degradation when they are given the same parameter budget as DyMoE, mostly due to overfitting.\\n\\n> This paper mentions \\\"with minimal computation increase,\\\" but it seems insufficient experimental or theoretical evidence is provided. Could there be a comparison of throughput?\\n\\nCompared to finetune, the running-time lower bound, on the cora-full dataset, we observe that the training time increases by 7%, whereas we observe a 109% performance increase. Compared to the ER method, the training time increase is 4%, whereas the performance increase is 14.4%. This shows that DyMoE achieves higher performance in an efficient way.\\n\\nFrom a theoretical standpoint, the complexity of training DyMoE is $O(nkT)$, where $n$ is the number of samples, $k$ is the number of active experts, and $T$ is the complexity of a single expert, while the complexity of online GNN is $O(nT)$; because $k$ is usually a small constant, the running time of DyMoE is comparable to the lower bound of online GNN.
On the contrary, the retraining method has a complexity of $O(ntT)$, where $t$ is the number of data blocks, and $t$ is usually large ($t > 10$), leading to higher computational costs.\n\n>The paper claims that this method can handle topological and contextual changes, but how is this topological change quantified and assessed? Understanding the extent and nature of these changes is crucial for evaluating the effectiveness of the proposed approach. There are no experiments to address this problem.\n\nNote that there is no golden rule/single metric to determine and quantify the distribution shift. Moreover, not all structural changes impact the labeling process. Hence, we did not include a distribution-shift quantification in the submission. However, we agree it would be more tangible to quantify structural changes across data blocks to understand the problem and the proposed method, and hence we provide the changes of several graph properties to illustrate the evolution of the data blocks. Because most datasets have over 10 data blocks, we present the overall standard deviation and mean of the graph properties for clarity. 
Specifically, we compute two important graph properties, the average number of triangles per node and the graph density, for each data block, and we compute the standard deviation and mean of these properties across data blocks, to show the change in graph structure.\\n\\n| | Ave # triangles (Mean) | Ave # triangles (std) | Density (Mean) | Density(std) |\\n|----------|------------------------|-----------------------|----------------|--------------|\\n| arxiv | 14.45 | 12.95 | 6.33E-05 | 2.54E-05 |\\n| coraFull | 5.69 | 1.21 | 0.000427 | 0.000469 |\\n\\nFrom the results we can see that both properties have large standard deviations, validating the necessity of accounting for the structural shift.\"}", "{\"summary\": \"The paper presents a Dynamic Mixture-of-Experts (DyMoE) framework for incremental graph learning, aiming to address catastrophic forgetting in graph neural networks (GNNs) when new data arrives sequentially. Unlike traditional approaches that treat prior knowledge uniformly, DyMoE dynamically adds specialized expert networks to handle each new data block, optimizing the reuse of relevant information. To reduce computational demands, the authors introduce a sparse DyMoE variant that selectively activates only the top-k experts for predictions. DyMoE achieves a notable improvement in accuracy with minimal computational overhead, making it an effective solution for class and instance incremental learning settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"the studied problem is important and interesting. It deserves more attention\", \"the proposed method and identified problem, despite not being very novel, is sound to some extend.\"], \"weaknesses\": \"- the identified problem and the proposed method are not novel.\\n\\n a. 
the correlation induced by the dependency of graph data was previously introduced (e.g., see [1,2]), rendering continual learning on graph data different from that on other i.i.d. data \n\n b. applying the MoE model to continual learning seems to be a common combination (e.g., see [3,4]). I certainly agree with the fact that in different settings (especially under graph data), such a combination might require different considerations and tailored designs. However, in the current version of the paper, I fail to understand what the unique problems are for applying such a combination in the studied setting in the paper.\n\n- the effectiveness of the method needs further support\n\n a. I agree with the intuition that a separate expert network can better store/memorize information for each task. However, it comes with two natural questions: 1) what happens to the parameter complexity (overall model size) when the data stream is large and 2) how effective is this approach for parameter/information sharing among tasks (otherwise, why not train a separate network for each task)\n\n b. It is not clear why the gating mechanism in MoE is effective in addressing catastrophic forgetting. In the graph data case, as shown in [1,2], the correlation/dependence among data can induce a distribution shift and can render the problem unlearnable. Why can the (proposed) gating mechanism address this? Furthermore, even in the case of i.i.d data, it is still not clear to me why the gating mechanism does not suffer from catastrophic forgetting (e.g., forgetting how to route the decision for the previous task). \n\n- the theoretical result is not rigorous and clear, and it is not connected well to support the proposed method\n\n a. First, it is not clear what $\\mathcal{L}(Dy)$ and $\\mathcal{L}(PI)$ are exactly. Assuming the loss function is the one given by (10), the loss function would have a dependency on $\\beta$ which is not reflected in the theorem statement.\n\n b. 
There are many cases, where the theoretical result does not add any value. For example, in the case of separable data, it can be shown that with enough model parameters, there exists a model parameter that can achieve zero loss. In this case, the other side of the result is also true, i.e., $\\\\mathcal{L}(PI) \\\\leq \\\\mathcal{L}(Dy)$. Furthermore, the result is about existence. It does not tell us much about how difficult to find such a parameter.\\n\\n c. The theorem statement tells us that it is possible for the proposed method to achieve a smaller loss value (with respect to the proposed objective) compared to the parameter isolation. It needs further argument why the proposed training objective is the best/very reasonable for measuring model performance. what happens if the objective is switched to the one used in $\\\\mathcal{L}(PI)$ paper? How does this training loss connect with the generalization performance (what we actually care)?\\n\\n- the experimental studies have some questions as well\\n\\n a. (minor) The selected baselines seem to be a bit outdated \\n\\n b. (major) The performance of the baseline method seems to be much lower than the one presented in the benchmark study[5]. What are the reasons behind this?\\n\\n[1] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] \\\"Continual Learning on Graphs: A Survey.\\\" arXiv preprint arXiv:2402.06330 (2024).\\n\\n[3] \\\"Theory on Mixture-of-Experts in Continual Learning.\\\" arXiv preprint arXiv:2406.16437 (2024)\\n\\n[4] \\\"Boosting continual learning of vision-language models via mixture-of-experts adapters.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
2024.\\n\\n[5] \\\"Cglb: Benchmark tasks for continual graph learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 13006-13021.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EXsiGFkwV6
Realistic-Gesture: Co-Speech Gesture Video Generation through Semantic-aware Gesture Representation
[ "Pinxin Liu", "Pengfei Zhang", "Hyeongwoo Kim", "Pablo Garrido", "Ari Shapiro", "Kyle Olszewski" ]
Co-speech gesture generation is crucial for creating lifelike avatars and enhancing human-computer interactions by synchronizing gestures with speech in computer vision. Despite recent advancements, existing methods often struggle with accurately aligning gesture motions with speech signals and achieving pixel-level realism. To address these challenges, we introduce Realistic-Gesture, a groundbreaking framework that transforms co-speech gesture video generation through three innovative components: (1) a speech-aware gesture tokenization that incorporates speech context into motion pattern representation, (2) a masked gesture generator that learns to map audio signals to gestures by predicting masked motion tokens, enabling bidirectional contextually relevant gesture synthesis and editing, and (3) a structure-aware refinement module that employs differentiable edge connection to link gesture keypoints to improve video generation. Our extensive experiments demonstrate that Realistic-Gesture not only produces highly realistic and speech-aligned gesture videos but also supports long-sequence generation and video gesture editing applications.
[ "gesture generation; motion representation; video generation" ]
Reject
https://openreview.net/pdf?id=EXsiGFkwV6
https://openreview.net/forum?id=EXsiGFkwV6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tIo60ao9Gn", "pRDN8TmLMZ", "jHJL2ovS6k", "iookgFnheK", "aXC0qBCWkf", "a4kbL95oMZ", "Zw5r0gdZqN", "SOt3JD4Grj", "QYL7FaQ1f1", "PDImPnaDHA", "NVAYWZvvuj", "NHnwCYNLSk", "K1gXuVZSPY", "Hbyb0T6z4D", "EtIo8CLLWp", "CDAKbpsWDw", "BJx8ObdBhg", "99BhUsgwn2", "6vsUbLdEZ7", "26SgEfnPPY", "1ud7Y0Jyxh" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1731592863462, 1730625779591, 1732244991768, 1731587837980, 1732517848030, 1732106429593, 1731926250357, 1732352640409, 1730009802695, 1732351969834, 1730627856290, 1734595670967, 1731854567295, 1731928526657, 1730522067052, 1737523494250, 1732502267683, 1732869184383, 1731927981926, 1730640203888, 1731927459216 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_LNUW" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_zy2t" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_P3xu" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_XVKk" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_P3xu" ], [ "ICLR.cc/2025/Conference/Submission2259/Area_Chair_NPQQ" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_XVKk" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_zy2t" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission2259/Reviewer_LNUW" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_bDof" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ], [ "ICLR.cc/2025/Conference/Submission2259/Reviewer_bDof" ], [ "ICLR.cc/2025/Conference/Submission2259/Authors" ] ], "structured_content_str": [ "{\"comment\": \"### Response to Concerns About Novelty in Core Components:\\n\\nWe appreciate the reviewer\\u2019s feedback and recognize that the **VQ + Masking-based Representation Learning** for gesture generation may seem similar to existing work. However, the primary novelty of our method lies in **contextual distillation through contrastive alignment** for learning speech triggers that drive gesture patterns, as discussed in Section 4.1. To the best of our knowledge, no prior work has explored the **benefits of audio-motion representation learning for constructing contextualized motion representations to achieve better gesture generation**.\\n\\nTo further highlight the contribution of our approach, we plan to add additional experiments [here](https://anonymousaisubmission.github.io/) comparing our method with other approaches focused solely on gesture pose generation (without video synthesis), including:\\n\\n1. **EMAGE**: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling \\n2. **TalkSHOW**: Generating Holistic 3D Human Motion from Speech \\n3. **Rhythmic Gesticulator**: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings (as suggested by Reviewer P3xu)\\n\\nThese works utilize VQ+masking-based models or contrastive domain alignment, making them relevant baselines for comparison. We will conduct these experiments using the **BEAT-X dataset** for a fair comparison. 
For consistency, we will exclude the image-animation component from our method and extend gesture representation from 2D to 3D poses, incorporating SMPL-X expressions for facial gestures, as done in the literature.\\n\\n---\\n\\n### Response to Pose Alignment:\\n\\nWe apologize for any confusion caused by our previous explanation. To address variations in human skeletal structure, the pose representation for a sequence is calculated as the difference in position (\\u0394x, \\u0394y) relative to the coordinates of the starting frame, combined with the original position of that frame. This is represented as [\\u0394x, \\u0394y, x\\u2081, y\\u2081]. This formulation ensures that pose alignment is achieved since all subsequent frames\\u2019 motions are conditioned on the starting frame\\u2019s pose.\\n\\n---\\n\\n### Response to Detailed Comparison:\", \"regarding_speech_gesture_alignment\": \"In works like [1] and [2], contrastive learning is employed to align audio and gesture modalities, initializing the audio encoder for gesture generation. However, our method differs in the following key ways:\\n\\n1. **Contextualized Motion Representation**: We directly encode alignment information into the motion representation using an RQ-codebook. This allows the semantics and contextual triggers from speech (e.g., pronouns like \\u201cthis\\u201d or \\u201cthey\\u201d) to be fused into the motion embedding. This enables the generator to easily identify the corresponding motion representation in response to speech triggers during generation. In contrast, [1] and [2] lack such a strategy for creating more contextualized motion representations.\\n\\n Our ablation study in Table 3 (b), (c) shows that, with modality alignment alone, we achieve an FGD score of around 21. However, applying contextual distillation through contrastive learning further refines the motion representation, leading to better contextual-aware generation.\\n\\n2. 
**Temporal Alignment**: In addition to high-level, global sequence alignment, we also address **temporal alignment** by applying temporal masking and audio beat classification. None of the other works consider this intricate temporal alignment between speech and gestures.\n\nFor **VQ + Masking-based Generation**:\nWorks such as [4], [5], and [6] explore VQ-quantization with autoregressive or diffusion-based generation techniques. While these methods do utilize VQ-based frameworks, we believe our contribution in the generator design is unique in that we apply an **iterative re-masking-based approach specifically to co-speech gesture generation**. \n\nGenerating gestures as long sequences presents a challenge for autoregressive or diffusion models, as they are often too slow for real-time generation. In contrast, our approach using masking-based generation can efficiently generate gestures in just **5 steps**, with further steps potentially degrading quality by disrupting the temporal relationships between the modalities (unlike the text-motion domain, where more masking steps are beneficial, as in [5]). Our extensive experiments on generator design and generation procedures in Tab. (e) and (f) are non-trivial, and we believe the presented results will be helpful for future work.\n\nAdditionally, while [3] shares the same domain (co-speech gesture generation), it does not present overlapping or similar novelties to our work.\"}", "{\"summary\": \"The paper proposes Realistic-Gesture, a gesture video generation model with speech audio and the first video frame inputs. The proposed method first learns a joint embedding of the speech audio and gesture motions for speech-gesture alignment with CLIP-like contrastive learning. The pose features are 2D face and body landmarks from MMPose. The speech-audio features are the concatenation of WavLM, Mel spectrogram, and beat detections. 
The gesture motions are tokenized using Residual VQ (RVQ) with the distillation to minimize the cosine similarity with the motion encoder output in the gesture-audio joint embedding. Inspired by VALL-E, the method uses the Masked Gesture Generator for RVQ's base layer and the Residual Gesture Generator for residual layers. The Masked Gesture Generator is a transformer with the cross-attention between the audio embedding and the gesture embedding with AdaIN after the feed-forward layer to condition the model with the speaker identity. The Masked Gesture Generator is trained with randomly masked tokens. The Residual Gesture Generator is similar but consists of embedding layers corresponding to the RVQ residual layers. During training, one residual layer is randomly selected. The inference iteratively predicts the mask probabilities conditioned by the audio embedding to remask the token with the lowest probability for the next iteration. Finally, the source image is warped with TPS [Zhao and Zhang 2022] with the edge maps from the generated keypoints, followed by the image refinement GAN conditioned by the same edge maps, to generate the gesture videos.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Speech-gesture alignment with CLIP-like joint embedding with contrastive learning\", \"Motion tokenization with RVQ distilled by the joint embedding above\", \"Audio-conditioned Masked Motion Model with dedicated residual generators for the RVQ residual layers\"], \"weaknesses\": \"The contributions boil down to the motion representation and the motion generation model. There is not much to the image generation module. The visual quality may also be suffering because of the image generation module (see below).\\n\\nQualitatively, the generated videos from this method generally look better than other methods in comparison (ANGIE, MM-Diffusion, S2G-Diffusion), especially if I focus on the speech-gesture motions. 
However, there are moments where the human generation is creepier than others with this method, e.g. in noah1.mp4, the head and the body look disconnected. I am not too surprised that the TPS warping will introduce unnatural human poses, especially with larger 3D motions. Perhaps the image warper and the refiner need more work. The motion representation and the generation model perhaps could be hooked up to some other conditional image/video generator.\\n\\nThe writing is somewhat unclear on the image generation module. The main text directly jumps to the image refinement module without mentioning the TPS image warping, which confused me a bit. Perhaps 4.3 should be titled differently and recap the image warping module while clarifying this is not the paper's contribution (no strong opinion here).\\n\\nI understand that the space is very limited but I would like more elaborations on the ablations, especially on the motion representation and the generator design, so I can be confident that the main contributions of this paper are indeed effective. See my question.\", \"questions\": \"Can authors provide videos for the ablation study, especially on the motion representation and the generator designs? I want to be confident that the use of the CLIP-like joint embedding to distill the RVQ for the Masked Motion Model to generate tokens is helping to improve the visual quality.\\n\\nThe speech-gesture joint embedding could perhaps be used to evaluate the alignment of the speech audio and gestures, like the CAPP model from VASA-1 [Xu et al. 2024].\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"The dataset seems to be scraped off from YouTube. While the metadata shared in PATS [Ginosar et al. 
2019] may be licensed under \\u201cCC BY - NC - ND 4.0 International,\\u201d I do not think this means the videos linked by the metadata have the same license.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I have raised my score from 5 to 6, as the author has addressed most of my concerns. Can the author further provide quantitative indicators to evaluate the method's performance in generating long sequences compared to EMAGE, Talkshow and Rhythmic Gesticulator? The quality difference between the methods presented in the one-minute supplementary video is not significant.\"}", "{\"comment\": \"Thanks for your valuable suggestions and comments to this work. We address some of the technical details as follows:\\n\\n### Reply to Masking Strategy and Clarification:\\nThank you for pointing this out. To clarify, for an 80-frame video, we randomly select 24 continuous frames to mask during training. We conducted a preliminary experiment to assess the effect of random masking by comparing it with a setting that did not use any masking. The results are summarized in the table below. We include this \\\"no mask\\\" setting in the manuscript, in addition to the two original settings described in the Appendix.\\n\\n### Speech-Face retrieval vs. 
Face-Speech retrieval\\n| **Setting** | **R@1 \\u2191** | **R@2 \\u2191** | **R@3 \\u2191** | **R@5 \\u2191** | **R@10 \\u2191** | **R@1 \\u2191** | **R@2 \\u2191** | **R@3 \\u2191** | **R@5 \\u2191** | **R@10 \\u2191** |\\n|----------------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|\\n| **(a) All** | 0.181 | 0.350 | 0.485 | 0.722 | 1.343 | 0.226 | 0.361 | 0.429 | 0.677 | 1.207 |\\n| **(a) w/o mask** | 0.142 | 0.326 | 0.388 | 0.656 | 1.112 | 0.158 | 0.299 | 0.343 | 0.612 | 1.026 |\\n| **(b) Small batches**| 26.230 | 45.318 | 59.330 | 77.019 | 89.858 | 24.977 | 44.822 | 59.894 | 77.775 | 90.264 |\\n| **(b) w/o mask** | 25.373 | 44.221 | 60.432 | 78.141 | 88.232 | 24.534 | 44.532 | 59.121 | 74.232 | 87.675 |\\n\\n### Speech-Body retrieval vs. Body-Speech retrieval\\n| **Setting** | **R@1 \\u2191** | **R@2 \\u2191** | **R@3 \\u2191** | **R@5 \\u2191** | **R@10 \\u2191** | **R@1 \\u2191** | **R@2 \\u2191** | **R@3 \\u2191** | **R@5 \\u2191** | **R@10 \\u2191** |\\n|----------------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|\\n| **(a) All** | 0.102 | 0.237 | 0.327 | 0.587 | 1.230 | 0.158 | 0.271 | 0.406 | 0.654 | 1.320 |\\n| **(a) w/o mask** | 0.112 | 0.143 | 0.303 | 0.494 | 1.023 | 0.144 | 0.253 | 0.384 | 0.599 | 1.187 |\\n| **(b) Small batches**| 25.542 | 43.660 | 57.954 | 77.471 | 90.309 | 24.052 | 43.874 | 58.495 | 76.986 | 89.745 |\\n| **(b) w/o mask** | 23.437 | 40.653 | 54.332 | 74.983 | 88.273 | 22.454 | 40.235 | 56.383 | 74.436 | 88.675 |\\n\\n\\nFrom these results, we conclude that global contrastive learning can be enhanced by incorporating simple random masking, which helps improve the temporal alignment between audio and gesture features.\\n\\n---\\n\\n### Reply to Technical Details (Sequence Length and Audio2Pose Generation):\\nFor both contrastive learning and the 
audio2pose generation tasks, we use 80-frame videos, which correspond to approximately 3.2 seconds of video. This information is also available in the Appendix, Implementation Details section.\n\n---\n\n### Reply to Model Efficiency (Resolution and GPU Memory):\nThank you for raising this point. We\u2019ve added additional details in the table below, along with a comparison to the resource consumption of other models. For reference, we provide a link to our anonymous webpage demonstrating the comparison: [here](https://anonymousaisubmission.github.io/). The visual comparisons with Animate Anyone will be uploaded within the next two days.\n\n### Resource Consumption Comparison with Stable-Diffusion-based Image-Animation Models (1 NVIDIA A100 GPU), Inference for 80-frame Videos\n\n* denotes our re-implementation on the PATS dataset.\n\n| **Methods** | **Training \u2193** | **Batch Size** | **Resolution** | **Memory \u2193** | **Training Task** | **Inference \u2191** |\n|-------------------|----------------|----------------|----------------|--------------|-------------------|------------------|\n| AnimateAnyone* | 10 days | 4 | 512 | 40 GB | Pose-2-Img | - |\n| AnimateAnyone* | 5 days | 2 | 512 | 42 GB | Img-2-Vid (24 frames) | 15s |\n| **Ours** | 2.5 days | 64 | 256 | 64 GB | Img-Warp | \u2264 1s |\n| **Ours** | 1 day | 64 | 256 | 48 GB | Img-Refine | \u2264 1s |\n| **Ours** | 3.5 days | 32 | 512 | 60 GB | Img-Warp | \u2264 1s |\n| **Ours** | 1 day | 32 | 512 | 40 GB | Img-Refine | \u2264 1s |\n\n---\"}", "{\"title\": \"Response to authors\", \"comment\": \"Dear authors,\n\nThanks for your efforts to address my concerns.\n\nI have read the comments of all the reviewers. While I still have some concerns about the metrics in FGD and FVD, I will maintain my score of 5 in the final rating.\n\nBest\n\nReviewer P3xu\"}", "{\"comment\": \"We would like to thank all reviewers for their invaluable comments and feedback. 
We have replied to each reviewer individually to address their questions and concerns, and have updated the paper to include extra results, figures, and demo videos. We would like to highlight and answer some concerns raised in common:\n\n---\n\n### Novelty and Contribution Explanation\nTo present the contribution of this work over the existing literature: **1)** we are the first work to explore the benefits of audio-motion representation learning for constructing contextualized motion representations for gesture generation; **2)** we conduct extensive experiments to validate the masking-based model design with a fast inference speed of **5** steps and support for various downstream applications; **3)** we propose a structure-aware image refinement method to resolve the ambiguity introduced by image warping for pixel-level avatar animation. With these three, we are capable of achieving joint co-speech gesture motion generation and photo-realistic avatar video animation within one unified framework.\n\n---\n\n### Comparison with gesture generation only methods\nWe have included the comparison between our method and **EMAGE**, **TalkShow**, and **Rhythmic Gesticulator** in the new supplementary document Tab.2 and the corresponding demo video beat-x-comparison.mp4 in the revised demo page. It shows better performance over existing SOTA methods. 
In addition, we present the ablation videos ab-1.mp4, ab-2.mp4, and ab-3.mp4 for the PATS dataset, and also beat-x-comparion2.mp4 and beat-x-comparion3.mp4 for the BEAT-X dataset, demonstrating the benefits of contextual alignment and distillation for motion representation learning in our framework.\n\n---\n\n### Comparison with avatar video generation only method\nAs suggested by Reviewer bDof, we also introduce visual comparisons with AnimateAnyone, which indicate that the diffusion-based model lacks the capability to disentangle camera motion from human motion in the videos, leading to a significant background temporal jittering problem.\n\n---\n\nWe hope our answers and updated paper help address the concerns raised in the initial reviews. Since we received highly mixed comments from the reviewers, we would highly appreciate active discussions from the reviewers and are happy to clarify any further questions.\"}", "{\"comment\": \"### Reply to Missing Citations and Comparisons:\nThank you for pointing out the missing related works. We have now included two relevant studies in the related work section: one on gesture generation and the other in conditional video generation. 
**While both are relevant works for gesture and video generation separately, neither provides a unified framework like ours.** We apologize for initially overlooking them and have added references to these works in the new revision (Line 102 and Line 120, respectively).\n\nFor the **gesture generation comparison**, we have included the additional experiments in the revised supplementary videos, and defer further details of the comparison to gesture-generation-only models, including **Rhythmic Gesticulator**, to our responses to Reviewer LNUW and Reviewer P3xu.\n\nRegarding **video synthesis**, **Make-Your-Anchor** (a Stable-Diffusion-based method) requires **complex** preprocessing with 3D tracking to obtain SMPL-X, and **does not provide a fast and computationally friendly way** to animate video avatars as we do. In addition, since its training code and data preprocessing details were not released before the ICLR submission (nor are they now), we were unable to replicate it. Therefore, due to its similarity with **AnimateAnyone**, we have opted to use AnimateAnyone as a replacement, and defer the time and resource comparison to our response to Reviewer XVKk. For the visual synthesis quality comparison, we provide **additional experiments in the supplementary material and demo videos**. Diffusion-based models **lack background motion control** and present a significant background temporal jittering problem. \n\n---\n\n### Reply to Gesture Editing:\nWe acknowledge that the explanation of gesture editing was insufficient and may have caused confusion. To clarify, the gesture editing process includes two main types:\n\n1. Editing the intermediate segment of a video to transfer the style of one speaker to another (as shown in the original demo).\n2. Replacing a small portion of audio in the original video with new audio and rerendering the video. 
(as shown in the revised demo video)\\n\\n---\\n\\n### Reply to Distorted Gesture Pattern Transfer:\\nThank you for pointing out the distortion in some of the \\\"Gesture Pattern Transfer\\\" videos. We recognize that these issues arise due to mismatched body proportions between source and target characters, and are limited by the available data on speaker identities. We hope that future work of scaling up the training will help address this issue.\\n\\n---\\n\\nWe hope our explanations can provide clear presentations of our work and address your concern.\"}", "{\"comment\": \"Thanks for your additional comments. For the table we provided in the revised supplementary material, our measure is conducted on long-sequence generation.\\n\\n\\n| Method | FGD (\\u2193) | BC (\\u2191) | Diversity (\\u2191) | MSE (\\u2193) | LVD (\\u2193) |\\n|--------------------------------|---------|---------|---------------|------------|-----------|\\n| Rhythmic Gesticulator [AO et al. 2022] | 6.453 | 6.558 | 9.132 | - | - |\\n| TalkSHOW [Yi et al. 2022] | 6.209 | 6.947 | 13.47 | 7.791 | 7.771 |\\n| EMAGE [Liu et al. 2023] | 5.512 | **7.724**| 13.06 | 7.680 | 7.556 |\\n| Ours (w/o Distillation) | 7.479 | 7.395 | 12.12 | 7.656 | 7.671 |\\n| **Ours** | **4.650**| 7.370 | **13.55** | **7.343** | **7.432** |\\n\\nTo prevent the confusion regarding the evaluation setting, for the short sequence generation setting on BEAT-X. We cut the original testing dataset audios into each segment of 256 frames (about 8.53 seconds) for conducting this evaluation. For the long-sequence generation, we use the raw sequence length of audio from the testing dataset for generation using a sliding window method in the main paper.\"}", "{\"summary\": \"This paper improves two-stage co-speech video gesture generation method in two main aspects. Two-stage here is auido2pose and pose2video stages. The authors:\\n\\n**1. 
Improve audio2pose stage by contrastive pretraining.**\\n\\n- The baseline started from a mask represenation learning and Residual VQ-VAE, refer to MoMask [Guo et al. 2024]. The authors modify it to fuse audio feature by cross-attention. \\n- The authors pretrain gesture encoders, and audio encoders using a audio2pose contrastive learning. Then, the audio encoder in audio2pose generation stage is initialized by pretrained audio encoder. The gesture encoder, in generation stage is distilled to keep the similarity to pretrained gesture encoder.\\n\\n**2. improve pose2video stage by learned edge-map for image warping.** \\n\\n- The baseline is a pixel-level image-warping based pipeline. Using Thin Plate Splines (TPS) as mentioned in Appendix. G. The image was initially warpped and then refined by network.\\n- The authors propose a thickness learnable edge heatmap. And using this heatmap to improve the warping results. \\n\\nThe authors present experiments to show the overall results outperform previous methods, and have the ablation studies for demonstrating each improvement is effective.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposals in this paper are correct and have insights:\\n\\n**1. Improve audio2pose stage by contrastive pretraining.**\\n\\nThe concept of improving the audio2pose generation via contrastive learning pretraining is correct and from the results, it improves the audio2pose generation significantly. In image generation field, researchers use pretrained text-image CLIP encoder to improve the results. this paper shares the same insight in audio2pose domain. \\n\\n**2. improve pose2video stage by learned edge-map for image warping.**\\n\\nThis is a valuable improvement to pixcel-level image warping based approach. 
Firstly, most current methods may focus on the improvement of latent-diffusion-based methods, but they are typically slow in real-world applications and have noisy backgrounds which require further post-processing. The pixel-level approach is very valuable for improvement due to the clean background and faster inference speed. Second, it is correct that the most important problem is the correctness of the \\\"flow\\\" for warping-based approaches, and the thickness of the flow is the most important factor. Using a learned approach to solve this avoids a lot of manual parameter adjustments.\\n\\nThe experiments, in particular the ablation part, show the effectiveness of each proposed module.\", \"weaknesses\": \"I have a few unclear implementation details.\\n\\n1. Random mask for contrastive learning training, in Lines 211-212: \\\"we random mask 30% ...\\\" I'm confused whether the mask is at a continuous short-sentence level or frame level, and why it could improve the low-level similarity learning. I suggest writing more explanations here.\", \"questions\": \"The questions that will influence my score are:\\n\\n1. Random mask for contrastive learning training, in Lines 211-212: \\\"we random mask 30% ...\\\" I'm confused whether the mask is at a continuous short-sentence level or frame level, and why it could improve the low-level similarity learning. I suggest writing more explanations here.\\n\\n2. What is the sequence length used for training the contrastive learning and audio2pose generation?\\n\\n3. Similarly, what is the resolution for the image training and evaluation stage? Will the image-warping-based approach suffer a GPU memory issue when the resolution increases?\", \"the_question_will_not_influence_my_score_is\": \"1. Some discussion about early-stage audio2pose + pose2video work like speech2gestures [Ginosar et al. 2019] and Speech Driven Template [Qian et al. 2021]
in the related work section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes Realistic-Gesture, a framework that enhances co-speech gesture video generation through speech-aware gesture tokenization, a mask gesture generator for audio-to-gesture mapping, and a structure-aware refinement module.
The results demonstrate that Realistic-Gesture creates realistic, speech-aligned gesture videos and supports long-sequence generation and editing.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper proposes a framework for generating co-speech body-gesture videos, dubbed Realistic-Gesture. Realistic-Gesture integrates a speech-aware gesture motion representation, a masked gesture motion generator, and a pixel-level refinement module to facilitate high-quality video generation. The strengths of this work are summarized as follows:\\n1. The insight of the work is interesting and highly practical in real-life applications.\\n2. The experimental workload is solid and persuasive.\\n3. The demo videos are helpful for reviewers to gain a comprehensive understanding of this work.\", \"weaknesses\": \"However, I still find some weaknesses and some main questions in this manuscript:\\n\\n1. In the introduction section, the authors put much effort into introducing existing works. This may lead to much overlap with the ``related work'' section. The motivation and insight of this work could be summarized more compactly. The authors could leverage more space to elaborate on the high-level technical contribution of their work.\\n\\n2. The teaser figure (Fig. 1) seems confusing. I cannot obtain significant insight from it. Meanwhile, there are no effective captions in the manuscript.\\n\\n3. There are some typos, e.g., in line 173, \\\"To achieve this goal, We...\\\" The letter ``W'' should be lowercase.\\n\\n4. The technical novelty seems a little bit weak.
The Residual Gesture Generator and Masked Gesture Generator are very similar to previous works, such as:\\n\\n[1] Generating Holistic 3D Human Motion from Speech, in CVPR 2024.\\n\\n[2] EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling, in CVPR 2024.\\n\\n[3] BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis, in ECCV 2022.\\n\\n5. I am very curious that the FGD and FVD in Experimental Table 1 seem a bit weird. While the authors' method achieves a large margin of improvement in FGD over the suboptimal S2G (1.303 vs 23.646), the FVD of these two methods is very close (476.120 vs 486.134). Does this mean that the authors' proposed ``STRUCTURE-AWARE IMAGE REFINEMENT'' is not effective?\\n\\n6. The proposed ``learnable edge heatmaps'' are similar to the work [4]. I suggest the authors compare with it.\\n\\n[4] Audio-driven neural gesture reenactment with video motion graphs, in CVPR 2022.\", \"questions\": \"Please refer to Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper received mixed reviews and an overall borderline rating. Reviewers have pointed out the issues of the generated videos in the supplementary materials, and that the core components are not sufficiently novel. AC also checked the generated videos, and agreed with the reviewers that the generated video quality is subpar. For instance, the hands are quite blurry in the generated results of the supplementary materials. Reviewers P3xu, zy2t and bDof all raised the concern that the core components of these modules closely resemble previous studies.
Based on the inferior generation results and insufficient novelty, the AC made the final decision.\", \"additional_comments_on_reviewer_discussion\": \"The most critical issues for this work are the generation quality and the novelty of the proposed components. Reviewers P3xu, zy2t and bDof all raised similar concerns in the initial reviews. The authors provided feedback on the raised concerns, and some of the reviewers' concerns have been effectively addressed. However, reviewer bDof pointed out two works which are highly related to the proposed components and to concerns shared among reviewers. In addressing the novelty, the authors emphasize that the novelty lies in the unified framework, but the final results are not superior. The AC agreed with the reviewers that the long-sequence generation quality is indeed not superior. Therefore, the contributions of the proposed components do not seem significant.\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Thank you for the detailed feedback. The experiments with objective scores addressed my concerns about the usage of the random mask, the training length, and the image resolution.\\n\\nFrom the additional experiments, I think it is valuable to show that masked representation learning could further improve the performance of contrastive learning. That may inspire further work to combine these two.\\n\\nAnd the inference speed comparison demonstrates that this non-diffusion-based model is more practical for real-world applications such as real-time generation or streaming systems. Even though a lot of work is based on diffusion, I think the exploration of non-diffusion models is still valuable for these online applications. So I raised my score from 6 to 8.\"}", "{\"comment\": \"### Reply to Long Sequence Generation Quality:\\nThank you for pointing this out.
We will provide additional statistics to measure the performance degradation during long-sequence generation and editing in the revised supplementary material; they can also be found [here](https://anonymousaisubmission.github.io/). During our additional study of gesture generation on the BEAT-X dataset, we found that long-sequence generation, unlike on the PATS dataset, does not affect the generation quality much. We suppose this might be because the PATS dataset is sourced mostly from videos shorter than 10 seconds.\\n\\n---\\n\\n### Reply to Detailed Comparisons:\\nWe additionally provide the experiments for the BEAT-X dataset in the revised supplementary material, with visual comparisons in the demo page videos. We find that adding contextual distillation can improve the generation quality and achieve better quality than any existing works. Please see the revised supplementary material for further details.\\n\\n---\\n\\nWe hope our explanations and additional demo videos provide clear presentations of our work and address your concern.\"}", "{\"summary\": \"The paper introduces a framework aimed at generating realistic co-speech gesture videos. This framework tackles the challenges of establishing correspondences between speech signals and body movements, inferring suitable gestures from speech samples, and rendering the target speaker performing these gestures in a lifelike manner. The authors propose three innovative components to accomplish this:\\n1. A speech-aware gesture representation that aligns facial and body gestures with speech semantics, enabling fine-grained control.\\n2. A mask gesture generator that maps audio signals to gestures by predicting masked motion tokens, thereby enabling bidirectional and contextually relevant gesture synthesis and editing.\\n3.
A structure-aware refinement module that uses a multilevel, differentiable edge connection to link gesture keypoints for the generation of detailed videos.\\n\\nThe paper's contributions include the production of highly realistic, speech-aligned gesture videos, as well as support for long-sequence generation and video gesture editing applications. The experiments demonstrate the method's superiority over existing approaches in both quantitative and qualitative metrics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a well-articulated approach to generating co-speech gesture videos, demonstrating a clear understanding of the associated challenges and the necessity for a sophisticated method to address them.\", \"The authors have crafted a compelling narrative surrounding the design of their method, offering adequate motivation for each component.\", \"From a visual standpoint, the proposed method surpasses existing state-of-the-art techniques.\", \"The integration of speech-aware gesture representation, masked gesture generation, and structure-aware refinement is a significant strength, enhancing both the realism and controllability of gesture synthesis.\"], \"weaknesses\": \"**Long Sequence Generation Quality**: The author identifies the capacity to generate long sequences as a primary contribution of their method. However, the supplementary video material reveals that the quality of long sequence generation is subpar; in later stages, the hands become blurry, and behavioral patterns tend to repeat. Furthermore, the author provides a limited number of videos and fails to present quantifiable metrics for assessing the quality of long sequence generation. 
Consequently, I have significant doubts regarding the framework's ability to generate long sequences effectively.\\n\\n**Lack of Novelty in Core Components**: Although the framework offers an integrated approach to gesture synthesis, its core components\\u2014speech-aware gesture representation and masked gesture generation\\u2014have been examined in various forms in recent literature.\\n- The contrastive learning approach for speech-gesture alignment has been utilized in gesture synthesis methods [1,2,3] and is very similar to the CSMP introduced in [1].\\n- Similarly, the use of RVQ [4,5] and masked gesture synthesis techniques [6] is not new to the field.\\n\\nThe paper should offer a more detailed comparison with these methods and present a more comprehensive review of the related works.\\n\\n[1] Deichler, Anna, et al. \\\"Diffusion-based co-speech gesture generation using joint text and audio representation.\\\" Proceedings of the 25th International Conference on Multimodal Interaction. 2023.\\n\\n[2] Liu, Xian, et al. \\\"Learning hierarchical cross-modal association for co-speech gesture generation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[3] Xu, Zunnan, et al. \\\"Chain of generation: Multi-modal gesture synthesis via cascaded conditional control.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 6. 2024.\\n\\n[4] Zhang, Zeyi, et al. \\\"Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis.\\\" ACM Transactions on Graphics (TOG) 43.4 (2024): 1-17.\\n\\n[5] Wang, Congyi. \\\"T2M-HiFiGPT: Generating High Quality Human Motion from Textual Descriptions with Residual Discrete Representations.\\\" arXiv preprint arXiv:2312.10628 (2023).\\n\\n[6] Mao, Xiaofeng, et al. \\\"MDT-A2G: Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation.\\\" Proceedings of the 32nd ACM International Conference on Multimedia. 
2024.\", \"questions\": [\"This method utilizes 2D keypoints as guiding conditions. However, variations in human skeletal structure and facial shape may result in distortions of the generated portrait identity. Has alignment been performed in this context? If so, what techniques are employed for alignment?\", \"What is the quality of generation for long sequences? Can the author provide quantitative indicators to support their findings?\", \"Can the author provide a more detailed description of the differences compared to the literature referenced in the Weakness? If the author can offer sufficient explanations to enhance the novelty of the paper, I will consider increasing the score.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"The rebuttal videos on ablations are insightful. The alignment makes a more drastic and clear difference in the final result than the distillation. The difference from the distillation is very subtle and whether the difference is an improvement is subjective. Looking at these ablation videos, I am surprised that the FGD with distillation improves significantly in Table 3 (b). Do the authors have an explanation for this?\\n\\nThe ablation of BEAT-X and comparison with AnimateAnyone show the advantages of the method.\\n\\nWhile I am in support of accepting this paper, I do think the paper may want to tone down on the need for distillation, based on the ablation videos. I will keep my score as is.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your response and the additional experiments. I have also reviewed the comments provided by the other reviewers. After carefully considering your response, I still have concerns regarding the following aspects:\\n\\n1. 
While the authors claim to present a unified framework for co-speech gesture video synthesis distinct from prior methods, the use of 2D keypoints as intermediate representations between the co-speech gesture synthesis module and the video synthesis module does not appear to be a novel approach.\\n\\n2. The gesture editing capabilities described by the authors can be achieved by combining any co-speech gesture synthesis model with a co-speech gesture video generation model. This suggests that these capabilities are not unique to the proposed method.\\n\\nOverall, this work enhances the co-speech gesture synthesis module and the gesture video synthesis module. However, as noted by Reviewer P3xu and Reviewer zy2t, the core components of these modules closely resemble those in previous studies. Therefore, I will maintain my initial score for the final rating.\"}", "{\"comment\": \"### Reply to Speech-Gesture Ablation Videos:\\nThank you for your suggestion. We have provided a webpage with additional ablation studies for your reference: [here](https://anonymousaisubmission.github.io/) or in the revised supplementary material. We additionally included the ablation of distillation, as you mentioned, for BEAT-X to prevent entanglement with the video rendering component from affecting your judgement. Additional ablation videos will be updated later as well.\\n\\n---\\n\\n### Reply to Separate Component Measurement:\\nWe appreciate your feedback. In response to similar requests from other reviewers, we have decided to separate and more clearly measure the contributions of our work in the following way:\\n\\n1. **Gesture Representation and Generation**: We have provided ablation studies based on the BEAT-X dataset to avoid potential unfair comparisons when reimplementing on the PATS dataset.\\n2.
**Conditional Video Generation**: In addition, we include the recent diffusion-based model **\\\"AnimateAnyone\\\"**, as mentioned in the response to Reviewer P3xu.\\n\\nWe have summarized our experiments and findings in the response to Reviewer zy2t.\\n\\n---\\n\\n### Reply to Speech-Gesture Alignment:\\nThank you for your insightful suggestion. We agree that the Speech-Gesture Alignment model could be used for evaluating the alignment of speech and gestures, similar to the CAPP model from VASA-1. As outlined in the Appendix, the model can achieve high recall for speech-gesture retrieval when trained on a set of 32 speech-gesture pairs. Future work may incorporate this component for better co-speech gesture generation evaluation.\\n\\n---\\n\\n### Reply to Confusion of Abrupt Transition to Image Animation:\\nThank you for pointing this out. We have revised the manuscript to address the abrupt transition to the image refinement module. Specifically, we have clarified the role of the **TPS image warping** and revised Section 4.3 to better explain the image warping process and emphasize that this is not a central contribution of our paper. Please see the updated section in Lines 294-297.\\n\\nWe hope our explanations and additional demo videos provide clear presentations of our work and address your concern.\"}", "{\"summary\": \"The paper addresses challenges in co-speech gesture video generation through a proposed method called Realistic-Gesture. This method incorporates three main components: a speech-aware gesture motion representation, a masked gesture motion generator, and a pixel-level refinement module. Experimental results show that the proposed approach generates realistic co-speech gesture videos, while also enabling long-sequence generation and video editing capabilities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.
The authors identify three key challenges in co-speech gesture video generation and propose solutions to address each one.\\n\\n2. A substantial number of ablation studies are conducted to verify the effectiveness of the proposed modules.\", \"weaknesses\": \"Generally, some challenges identified in this paper have been addressed in prior research; however, the authors failed to cite these studies or compare their findings with the experiments conducted in this work. Notable examples include:\\n\\n1. **\\\"Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings.\\\"** This study proposed a shared hierarchical embedding for both speech content and motion, which closely resembles the approach taken by the authors.\\n \\n2. **\\\"Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework.\\\"** This research adopted SMPL-X parameters as motion representations for co-speech video generation, enhancing the clarity of hand movements.\", \"additional_weaknesses_include\": \"1. While the proposed method can perform gesture inpainting, the authors inaccurately claim that it supports video gesture editing. This assertion is misleading, as the inpainted gestures are not controllable.\\n\\n2. In some of the \\\"Gesture Pattern Transfer\\\" videos, the character appears distorted, likely due to differences in body proportions between the source and target characters.\", \"questions\": \"See the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### Reply to Paper Flow Delivery Issue:\\nThank you for your constructive feedback. In response, we have revised the manuscript to address your concern by reducing the space dedicated to related works in the introduction and refocusing the section to more clearly highlight the technical contributions of our work.
We hope that this revision better emphasizes the motivation and key insights behind our approach. Please refer to the updated manuscript for further details.\\n\\n---\\n\\n### Reply to Writing Issue:\\nWe appreciate your careful review and have made the necessary revisions to the manuscript. Specifically, we have included the audio transcript and corrected the typo. We trust that these changes improve both the clarity and readability of the text.\\n\\n---\\n\\n### Reply to Similar Method [4]:\\nThank you for your suggestion. While we acknowledge the similarity with [4], we believe our work offers a significant distinction. The graph in [4] focuses on a subsequence of motion, while our method leverages an edge heatmap that captures the connectivity of joints within a single pose. Moreover, due to the closed-source nature of [4], we were unable to reimplement it for a direct comparison. We hope this clarification resolves the concern.\\n\\n---\\n\\n### Reply to Technical Novelty Concern:\\nThank you for raising the concern regarding the technical novelty of our approach. We understand that **VQ + Masking-representation Learning** is not new across various domains, but we would like to emphasize the unique contributions of our work.\\nIn contrast to previous works like [1], [2], and [3], which rely on conventional VQ-based motion tokens, we introduce **Contextualized Motion Representation** as the major innovation. It lies in **contextual distillation via contrastive alignment for learning speech triggers** for gesture patterns (as described in Section 4.1). To our knowledge, this is the first attempt to explore the benefits of **audio-motion representation learning** for constructing **contextualized motion representations for gesture generation**. 
Our approach directly integrates alignment information into the motion representation using an **RQ-codebook**, which allows us to fuse contextual triggers from speech (such as pronouns like \\\"this\\\" or \\\"they\\\") into the motion embedding. This enables the generator to accurately map speech triggers to corresponding motion representations during generation. \\n\\nFurther distinguishing our approach, we propose an **Iterative Re-masking-Based Generator Design**, which is optimized for **co-speech gesture generation**. Unlike autoregressive or diffusion models, which are often slow and not suitable for real-time generation of long sequences, our method can generate gestures in just **5 steps**. We demonstrate that additional steps beyond this can actually degrade quality by disrupting the temporal coherence between modalities (a challenge that does not arise in text-to-motion domains).\\n\\nOur extensive experiments, particularly those in Table 3 (b) and (c), demonstrate that applying modality alignment alone yields an FGD score of around 21. However, incorporating contextual distillation through contrastive learning significantly improves performance, resulting in better contextual awareness during generation.\\n\\nWe believe our **generator design and iterative re-masking approach** present significant advantages in terms of both speed and quality, and the experiments in Tab. (e) and (f) in the main paper clearly highlight these benefits.\\n\\nTo provide further clarity and quantitative comparisons, we include additional experiments and analysis based on a single speaker (Scott) in the revised supplementary material, along with a response to Reviewer LNUW. Demo videos showcasing our approach are also available [here](https://anonymousaisubmission.github.io/) and are included in the updated supplementary material.
We also provide a quantitative comparison of long-sequence generation on BEAT-X in the supplementary materials, which we believe underscores the advantages of our method.\\n\\n| Method | FGD (\\u2193) | BC (\\u2191) | Diversity (\\u2191) | MSE (\\u2193) | LVD (\\u2193) |\\n|---------------------------------|---------|---------|---------------|------------|-----------|\\n| Rhythmic Gesticulator [AO et al. 2022] | 6.453 | 6.558 | 9.132 | - | - |\\n| TalkSHOW [Yi et al. 2022] | 6.209 | 6.947 | 13.47 | 7.791 | 7.771 |\\n| EMAGE [Liu et al. 2023] | 5.512 | **7.724**| 13.06 | 7.680 | 7.556 |\\n| Ours (w/o Distillation) | 7.479 | 7.395 | 12.12 | 7.656 | 7.671 |\\n| **Ours** | **4.650**| 7.370 | **13.55** | **7.343** | **7.432** |\"}" ] }
EXnDAXyVxw
QT-DoG: Quantization-Aware Training for Domain Generalization
[ "Saqib Javed", "Hieu Le", "Mathieu Salzmann" ]
Domain Generalization (DG) aims to train models that perform well not only on the training (source) domains but also on novel, unseen target data distributions. A key challenge in DG is preventing overfitting to source domains, which can be mitigated by finding flatter minima in the loss landscape. In this work, we propose Quantization-aware Training for Domain Generalization (QT-DoG) and demonstrate that weight quantization effectively leads to flatter minima in the loss landscape, thereby enhancing domain generalization. Unlike traditional quantization methods focused on model compression, QT-DoG exploits quantization as an implicit regularizer by inducing noise in model weights, guiding the optimization process toward flatter minima that are less sensitive to perturbations and overfitting. We provide both an analytical perspective and empirical evidence demonstrating that quantization inherently encourages flatter minima, leading to better generalization across domains. Moreover, with the benefit of reducing the model size through quantization, we demonstrate that an ensemble of multiple quantized models further yields superior accuracy than the state-of-the-art DG approaches with no computational or memory overheads. Our extensive experiments demonstrate that QT-DoG generalizes across various datasets, architectures, and quantization algorithms, and can be combined with other DG methods, establishing its versatility and robustness.
[ "Domain Generalization", "Quantization", "Ensemble", "Network Compression", "Flat Minima", "Regularization" ]
Reject
https://openreview.net/pdf?id=EXnDAXyVxw
https://openreview.net/forum?id=EXnDAXyVxw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zBICk9WF9t", "wV9ItGBvmi", "wNTfOsrDWD", "trCRJ1PJvv", "r7rwaUltTd", "qXzm9n1PQX", "pnqbAYirdY", "ncuCZKzraE", "kHh0MpO1Mz", "jgc619Rnu8", "iYBOswpvBl", "iG3vOSlTpy", "eDoq4snt56", "cc3xRgy5yp", "P4Iumqh36T", "L0nvBnB5IK", "K6ElwEOBqj", "J1AE7hXsVh", "HhnMMM6XNE", "FYhFUcsAqX", "4U2CY56lhr" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733153719066, 1732557620512, 1730722709850, 1730131797613, 1732645687623, 1732794105420, 1732714915088, 1732623864309, 1732645741653, 1732213849453, 1730723359029, 1732646566036, 1732518883957, 1732213748948, 1733153654372, 1730712607641, 1734647733732, 1737523648022, 1732218220714, 1732214183801, 1732218208066 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_NxQN" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_Rbwh" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_Rbwh" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_SqJY" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_Wsn5" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_Wsn5" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Reviewer_SqJY" ], [ 
"ICLR.cc/2025/Conference/Submission4565/Area_Chair_KkSt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ], [ "ICLR.cc/2025/Conference/Submission4565/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you!\", \"comment\": \"Dear reviewer,\\nAs the discussion period nears its end, we kindly hope to receive your feedback today. We have carefully highlighted all revisions in red and would greatly appreciate your kind consideration before finalizing your recommendation.\\n\\nWe greatly value your input and hope the updates meet your expectations.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We sincerely appreciate your acknowledgment and recommendations. If there are any additional questions or concerns that we can address to further assist you in evaluating our work and potentially raising your rating, we would be happy to provide detailed responses.\"}", "{\"summary\": \"This paper investigates the impact of quantization noise on out-of-distribution (OOD) generalization. The authors observe that quantization noise appears to enhance OOD generalization capabilities in domain generalization tasks. While the observation is interesting, the methods employed for quantization and ensemble are standard and lack novelty.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is well-organized in general, and the observation that quantization noise can improve OOD generalization is interesting.\", \"weaknesses\": \"1. The methods for quantization and ensemble are quite standard and not specific to domain generalization, providing limited novelty.\\n2. The theoretical justification linking flat minima to improved OOD generalization is weakened by recent research suggesting that the connection between flatness and generalization is questionable [1,2].\\n3. 
Quantization noise is similar to uniform weight noise, but a systematic comparison with the latter as well as other weight perturbation schemes is missing.\\n\\n[1] Andriushchenko, Maksym, et al. \\\"A modern look at the relationship between sharpness and generalization.\\\" arXiv preprint arXiv:2302.07011 (2023).\\n[2] Mueller, Maximilian, et al. \\\"Normalization layers are all that sharpness-aware minimization needs.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"questions\": \"It would be interesting to see how different weight perturbation schemes can affect OOD generalization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes using quantization during model training as a strategy to enhance domain generalization. The authors demonstrate that the baseline ERM achieves competitive results when quantization is applied as an implicit regularizer, with quantization-induced noise guiding the model toward flatter minima in the loss landscape. Additionally, they introduce combining quantized models into a model soup, to further boost DG performance. Results on the DomainBed benchmark indicate that this approach performs comparably to state-of-the-art DG methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written, experimental results seem comparable against state-of-the-arts.\\n\\n2. The design of using quantized models seem to be a good fit in weight ensembling, which has been proved effective for improving generalization.\", \"weaknesses\": \"1. The primary issue is the limited contribution. Their main idea is to replace the standard training in ERM with the existing quantized technique. 
That is hardly a new practice, since quantization itself is originally applied in ERM (given that ERM is the basis of all training tasks), and they do not introduce a DG-specific quantization technique. Additionally, the ensemble of quantized ERM is also a simple extension of the original model-soup. These can be called a trick for improving DG, but shouldn't be listed as a main contribution.\\n\\n2. The overclaim is another problem. It is claimed that the theoretical connection between quantization and flatter minima is provided. To begin with, I cannot find a serious definition for the flatter minima of a model in the manuscript. Moreover, even with the informal definition in Eq. (5), they do not provide any theoretical evidence to support that $\\\\mathcal{F}\\\\_{\\\\gamma}(W^q) < \\\\mathcal{F}\\\\_{\\\\gamma}(W)$. Lastly, I doubt it can be theoretically proved; given that there are many quantization methods with different settings, it is hard to conclude a general framework that applies to all of them.\\n\\n3. According to Tab. 2, it seems different quantization methods can significantly affect the in-domain and out-of-domain accuracy, some performing even worse than the baseline ERM. Does this suggest that not all quantization leads to flat minima? If so, why can the adopted quantization method lead to flat minima?\\n\\n4. Experiments should be conducted on more data. The more realistic WILDS dataset could be used to further show their effectiveness. Ablation studies should be conducted on more datasets rather than just PACS and TerraInc, especially for Figure 3, where the upper two figures can barely support the claim.\\n\\n5. Some experimental settings are unclear. For example, how is the quantizer step size $s$ chosen? What is the quantizer step in Figure 3? Is it the same thing as $s$? If not, how is it decided?
Is the quantization only applied when saving the model?\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"We sincerely appreciate your thoughtful feedback and acknowledgment of the clarifications provided. We value your suggestion to include a paragraph on how a model is trained for quantization and will incorporate this, along with additional details, into the implementation section of the manuscript.\\nIf you have any further questions, concerns, or areas where additional clarification could assist in your evaluation and potentially lead to a higher rating, we would be more than happy to provide detailed responses.\"}", "{\"comment\": \"Thank you for your thoughtful comments. We would like to address the concerns you raised.\\n\\n>**Combining several previous techniques into an existing application without decent analysis**\\n\\nWe acknowledge your perspective on our work. However, here\\u2019s why we think it should be accepted:\\n\\n1) Quantization is popular, but no one has explored its use for boosting domain generalization performance.\\n2) We explain why it works (flatter minima), how it works (adding noise = regularization), and how to achieve it (use QAT, not PTQ).\\n3) We deliver state-of-the-art domain generalization models without any bells and whistles.\\n\\n>**Overclaim on the theoretical evidence since there is no formal proof.**\\n\\nThe reviewer is correct that we do not provide a formal theoretical proof. As also noted in your original review, there may not be a formal proof that fits all existing methods. We appreciate this feedback and have adjusted our claims to better align with the evidence presented in the paper.
Specifically:\\n\\n1) In the contributions section, we revised the second claim from \\u201cWe theoretically and empirically demonstrate that QAT encourages flatter minima\\u201d to \\u201cWe empirically demonstrate that QAT promotes flatter minima in the loss landscape and provide an analytical perspective to explain this effect.\\u201d\\n2) In Section 3.3, we discuss the flatness definition mentioned in Hochreiter & Schmidhuber (1997), which links to both Definition 1 in [a] and Cha et al. (2021). More specifically, we added \\u201cSimilar to (Dinh et al., 2017; Cha et al., 2021), we interpret flat minima as \\u201ca large connected region in weight space where the error remains approximately constant,\\u201d as defined by (Hochreiter & Schmidhuber, 1997)\\u201d.\\n3) We modified \\u201ctheoretical insights\\u201d to \\u201canalytical perspective\\u201d in the abstract, introduction, and conclusion.\\n\\n[a] Sharp Minima Can Generalize For Deep Nets, in ICML'17\\n\\nWe sincerely thank the reviewer for this valuable feedback. We would greatly appreciate it if the reviewer could clarify:\\n1) Whether our revised language resolves the issue of overclaiming.\\n2) If the three points we presented above are adequately justified in the paper.\\n3) If other concerns from your original review have been adequately addressed.\\n4) If there are other concerns or any particular analysis that would change your opinion on the paper.\"}", "{\"title\": \"Acknowledge the response\", \"comment\": \"Thanks to the authors for the response. After checking the revised manuscript, my main concern remains.\\n\\n1. I don't think there is much contribution for one to combine several previous techniques into an existing application without decent analysis.\\n\\n2. I'm also not convinced by the claimed theoretical connection between flat minima and quantization.
Flat minima have been formally defined in previous work [a], with either Definition 1 or 2 providing a clearer explanation than the current text. To support the theoretical claim, the authors need to provide serious proofs based on the definition (rather than intuitive explanations) to show that quantization leads to flatter minima. The current form with a simple Taylor expansion is too weak for such a strong claim.\\n\\nFor the above reasons, I have decided to maintain my rating.\\n\\n[a] Sharp Minima Can Generalize For Deep Nets, in ICML'17\"}", "{\"comment\": \"Thank you for the response, which has clarified most of my doubts. Having a paragraph introducing how a model is trained for quantization would be helpful.\\n\\nI would like to maintain my original score.\"}", "{\"comment\": \"We hope that our response has provided clarity and effectively addressed your concerns. We would greatly appreciate it if you could acknowledge this. If there are any remaining questions or unresolved concerns, we would be more than happy to provide further clarification. Thank you sincerely for your time and valuable feedback\\u2014it is greatly appreciated.\"}", "{\"comment\": \"We sincerely appreciate your recognition of the intriguing role of quantization noise in enhancing generalization, as well as your positive remarks on the organization of our work. Below, we address your major concerns in detail:\\n\\n>**W1: The methods for quantization and ensemble are quite standard and not specific to domain generalization, limited novelty**\\n\\nOur approach is the first to use quantization to address domain generalization challenges. We present a novel analysis to show that quantization-aware training introduces an implicit regularization effect, guiding the model toward flatter minima and thus enhancing out-of-distribution (OOD) generalization.
We appreciate that the reviewer acknowledges this as an ``interesting observation\\\", which further highlights that it brings novelty to the community. \\n\\nAdditionally, we introduce the Ensemble of Quantization (EoQ), which combines quantization and ensembling to amplify the benefits of both techniques. EoQ achieves state-of-the-art OOD performance while requiring minimal resource overhead, demonstrating its practical and methodological significance. Our work not only highlights a novel application of quantization noise but also provides empirical validation of its effectiveness across diverse datasets, setting it apart from standard techniques. We believe this to be of interest to the community.\\n\\n>**W2:How does your work reconcile the flatness-generalization relationship with recent critiques[1,2]?**\\n\\nThank you for highlighting this important discussion! \\n\\nTo better understand the conclusions of [1], we reached out to its authors, and Andriushchenko (the primary author) directed us to his post-ICML discussions (https://x.com/maksym_andr/status/1687395919442948096), where he acknowledges that sharpness (or its inverse, flatness) can still be a valuable measure of generalization. He states that their work ``doesn't imply that sharpness is useless, particularly since the empirical success of SAM is undeniable.\\\" This perspective aligns well with our findings. We do not claim that flatter minima are universally better, but instead demonstrate their utility in reducing sensitivity to domain shifts, as supported by experiments across multiple benchmarks.\\n\\nFurthermore, the work in [2] is orthogonal to our findings. It shows that, on certain datasets, applying SAM to normalization layers only can improve performance over applying it across the entire model. We added discussion around [1,2] in the revised manuscript to address the nuanced relationship between flatness and generalization.\\n\\n[1] Andriushchenko, Maksym, et al. 
\\\"A modern look at the relationship between sharpness and generalization.\\\" ICML, 2023. \\\\\\n[2] Mueller, Maximilian, et al. \\\"Normalization layers are all that sharpness-aware minimization needs.\\\" NeurIPS, 2023.\\n\\n>**W3: How does quantization noise compare to uniform weight noise and other weight perturbation schemes?**\\n>\\nQuantization noise and uniform weight noise share similarities in that both introduce perturbations to the model's parameters. However, quantization noise specifically arises from the discretization of the weights, which can lead to a more structured form of regularization due to the rounding or truncation during the quantization process. In contrast, uniform weight noise typically adds random perturbations with a uniform distribution, which may not exhibit the same structured regularization properties.\\n\\nBelow, we provide the results of our ablation study on the PACS dataset with uniform noise with different minimum and maximum value:\\n\\n| Noise | OOD Accuracy |\\n|------------------------------|-------------------|\\n| no noise | 84.7 \\u00b1 0.5 |\\n| uniform(-0.0001, 0.0001) | 82.8 \\u00b1 0.6 |\\n| uniform(-0.00005, 0.00005) | 83.9 \\u00b1 0.4 |\\n| uniform(-0.00001, 0.00001) | 84.9 \\u00b1 0.3 |\\n| uniform(-0.000005, 0.000005) | 85.4 \\u00b1 0.4 |\"}", "{\"summary\": \"The paper works on domain generalization problem through weight quantization, which can be an implicit regularize by inducing noise in model weights to guide the optimization process to flatter minima against domain shifts. The paper provides both theoretical and empirical evidence for quantization to encourage flatter minima. Experiments on several datasets demonstrate the generalization ability of the method across various datasets, architectures, and quantization algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n2. 
The idea of improving domain generalization ability through weight quantization is interesting. \\n\\n3. The experimental results demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. Equation 4 shows that the weights in flatter minima should have lower losses on the quantization model. However, it is not clear how the loss function guarantees that lower loss can in turn lead to local minima. It is also not clear how large noise dominates the optimization and leads to sub-optimal convergence.\\n\\n2. As shown in the experiments, the model size will definitely be reduced through quantization, but how about the training and inference costs of the model? Especially when the quantization is conducted during the model training.\", \"questions\": \"1. How are the ensemble quantization models trained? Are the models trained differently from the beginning or only from the quantization step?\\n\\n2. How does the quantization step (e.g., 2000 for DomainNet) affect the model performance? How do the authors select such a hyperparameter?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely hope that our response has addressed your concerns and provided greater clarity. Your feedback is extremely valuable to us, and we would greatly appreciate it if you could kindly confirm whether we have adequately addressed your points. If you have any remaining questions or further concerns, we would be more than willing to provide additional clarification. Thank you once again for your time and feedback in helping us improve our work.\"}", "{\"comment\": \"Thanks for your responses.
Most of my concerns have been addressed and I would like to maintain my original rating of 6.\"}", "{\"comment\": \"Thank you for appreciating the clarity of our writing, the novelty of leveraging weight quantization for domain generalization, and the strength of our experimental results. We address your major questions and concerns below.\\n\\n>**W1: Equation 4.**\\n\\nThank you for your comment. We acknowledge that our discussion of Equation 4 might have been unclear. In short, and as updated in the paper, Equation 4 shows that the noise induced by the quantization process helps the optimization escape from sharp minima and instead settle in flatter ones. There are, however, no guarantees that these flatter minima have a lower loss value than others or are global minima. We hope the revised text for Equation 4 clarifies what we meant.\\n\\n>**W2: How does large noise dominate optimization in Equation 4 and result in sub-optimal convergence?**\\n\\nThe quantization-induced noise $\\\\Delta$ in Equation 4 affects the loss function as follows:\\n\\n$L(w + \\\\Delta) \\\\approx L(w) + \\\\nabla L(w)^T \\\\Delta + \\\\frac{1}{2} \\\\Delta^T \\\\mathcal{H} \\\\Delta,$\\n\\nwhere $\\\\mathcal{H}$ is the Hessian matrix. The second and third terms in this approximation introduce a regularization effect: they penalize sharp minima (large eigenvalues of $\\\\mathcal{H}$) and encourage convergence to flatter regions of the loss surface. However, if the noise $\\\\Delta$ becomes too large, it introduces over-regularization. This excessive noise can overly restrict the search space, preventing the model from reaching a good solution.
Instead, the optimization process may focus on minimizing the loss in a way that avoids sharp regions, but sacrifices the ability to find the true minimum of the loss function. Table 8 evidences this trade-off: moderate noise improves out-of-distribution (OOD) generalization, but excessive noise harms both in-domain and OOD performance. The noise perturbation introduces stochasticity into the optimization process, which biases the trajectory toward flatter minima, but too much noise destabilizes the process, leading to suboptimal convergence.\\n\\nWe have revised the text in the paper to clarify this.\\n\\n\\n>**W3: How does quantization affect model size, training costs, and inference costs?** \\n\\nQuantization reduces both the memory footprint and latency. For example, a ResNet-50 model running on an AMD EPYC 7302 processor achieves a latency of 34.28ms in full precision and 21.02ms with 8-bit quantization. While quantization can theoretically reduce training costs by enabling dynamic bit precision switching, our setup does not support this, so training time remains almost unchanged. However, inference time and model size are significantly reduced, as detailed in the results section of the paper.\\n\\n>**Q1: How are ensemble quantization models trained\\u2014differently from the beginning or only at the quantization step?**\\n\\nThe models are trained independently from initialization, using different random seeds to ensure diversity. We have clarified this in the updated manuscript.\\n\\n>**Q2: How does the quantization step affect the model performance? How do the authors select such a hyperparameter?**\\n\\nThe quantization steps were empirically determined. We added the ablation studies in the appendix to discuss the performance trends.
Here is the ablation study performed on the PACS dataset:\\n\\n| Quantization Step | OOD Accuracy |\\n|-------------------|-------------------|\\n| No quantization | 84.7 \\u00b1 0.5 |\\n| 1000 | 86.2 \\u00b1 0.4 |\\n| 2000 | 87.8 \\u00b1 0.3 |\\n| 3000 | 86.9 \\u00b1 0.4 |\\n| 4000 | 85.1 \\u00b1 0.3 |\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear reviewer,\\nAs the discussion period nears its end, we kindly hope to receive your feedback today. We have carefully highlighted all revisions in red and would greatly appreciate your kind consideration before finalizing your recommendation.\\n\\nWe greatly value your input and hope the updates meet your expectations.\"}", "{\"summary\": \"This paper proposes quantization-aware training to improve models' domain generalization performance. Theoretical and empirical analysis show that quantization-aware training induces noise in model weights, which could guide the optimization process toward flatter minima that generalize better.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Introducing quantization to domain generalization, and drawing the potential link between quantization and flat minima is novel and helpful, as it could improve domain generalization performance as well as memory and computation efficiency.\\n2. Extensive experiments on DomainBed and empirical analysis show the effectiveness of QT-DoG.\", \"weaknesses\": \"1. The effect of $s$ should be discussed in more detail, as it plays an important role in the training and the theoretical analysis. Specifically, choosing a different $s$ for each channel in a layer should be justified. Ablation studies regarding $s$ can also be conducted to give a better understanding of the effect of $s$.\\n\\n2. Measuring sharpness by equation 5 needs more discussion.
When the model weights under measurement have different scales, perturbing the weights with the same $\\\\gamma$ does not serve as a fair way to obtain $w'$ in the local neighborhood of $w$. Relative flatness (Definition 3 in [1]) could be considered to address the dependency of the sharpness measurement on the scale of the model weights.\\n\\n[1] https://arxiv.org/pdf/2001.00939\", \"questions\": \"1. How much does $w_q$ differ from $w$ in training?\\n2. Why does each channel in a layer have a different scaling factor? How is the value of $s$ determined?\\n3. Given a fixed quantization bit $b$, how would $s$ influence the performance?\\n4. What is the scale of $w$ (e.g. norm) across different algorithms in section 3.3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The submission proposes a new approach for addressing the domain generalisation problem, where one must train a model on data from several source domains with the goal of zero-shot generalisation to one or many target domains. The proposed approach leverages weight quantisation, and the success of this method is explained by appealing to flat minima. The submission also includes an exploration of ensemble approaches to the DG problem, showing that this improves performance.\\n\\nThe reviewers have expressed concern about the novelty of the proposed approach, as it is a relatively straightforward combination of existing quantisation and ensemble approaches.
Moreover, the connection between flat minima and OOD generalisation performance is not made clear.\", \"additional_comments_on_reviewer_discussion\": \"There was some discussion between the authors and reviewers, but this did not result in much change in opinion.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \">**W5: Include more realistic datasets like WILDS to further demonstrate the effectiveness of your method**\\n\\nFollowing your suggestion, we performed experiments with 7-bit quantization on two datasets from the WILDS benchmark. Due to time limitations, we were unable to test all possible configurations. However, if there is a specific dataset that you believe would significantly impact the results or your perspective, we can prioritize running those experiments. We will also include additional experiments on the WILDS dataset in the revised manuscript to further substantiate our findings. We utilized the same experimental settings as outlined in the WILDS benchmark repository and incorporated quantization into the training process. The results presented below confirm our findings on the DomainBed [PACS, Terra, VLCS, Office, DomainNet] benchmark:\\n\\n| Dataset | Method | In-dist | OOD | Metric |\\n|-|-|-|-|-|\\n| Amazon | ERM | 71.9 (0.1) | 53.8 (0.8) | 10th percentile acc |\\n| Amazon | QT-DoG | 79.2 (0.5) | 55.9 (0.6) | 10th percentile acc |\\n| Camelyon | ERM | 93.2 (5.2) | 70.3 (6.4) | Average acc |\\n| Camelyon | QT-DoG | 96.4 (2.1) | 78.4 (2.2) | Average acc |\\n\\n\\n>**W6: Why were the ablation studies conducted only on PACS and TerraInc, and how do you justify the claims in Figure 3, where the upper two figures may not seem sufficient to support your argument?**\\n\\nAblation studies are commonly conducted on a single dataset; however, we have provided results on two datasets for greater robustness. We specifically chose PACS and TerraInc to ensure a balanced evaluation, with one larger and one smaller dataset represented.
If there is a specific dataset you would like us to include for further analysis, please let us know, and we will be happy to provide additional results.\\n\\nRegarding Fig. 3, there appears to be some misunderstanding. The upper two figures display in-domain validation accuracy plots. The intention behind showing these is to demonstrate that the in-domain validation accuracy remains high even after quantization. Additionally, the bottom plots show that the out-of-domain test accuracy is not only higher but also more stable for the quantized model compared to the non-quantized model. We have revised the caption for more clarity.\\n\\n>**Q1: How is the quantizer step size $s$ chosen? Is it the same thing as the quantization step in Fig. 3?**\\n\\nWe opted for a learnable $s$ as it is considered best practice [1,2,3,4] in the field, allowing the model to adapt and optimize this parameter during training.\\n\\nIn Figure 3, the ``quantization step\\\" refers to the specific iteration at which quantization is applied during training.\\n\\nHere are the results from our ablation study conducted on the PACS dataset with 7-bit quantization:\\n\\n| Quantization Step | OOD Acc. |\\n|-|-|\\n| No quantization| 84.7 \\u00b1 0.5|\\n| 1000| 86.2 \\u00b1 0.4|\\n| 2000 | 87.8 \\u00b1 0.3|\\n| 3000 | 86.9 \\u00b1 0.4|\\n| 4000 | 85.1 \\u00b1 0.3|\\n\\nWe have included these results in the appendix for further reference.\\n\\n[1] Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance, ECCV 2022. \\\\\\n[2] Learnable Companding Quantization for Accurate Low-bit Neural Networks, CVPR 2021. \\\\\\n[3] Learned Step Size Quantization, ICLR 2020. \\\\\\n[4] LSQ+: Improving low-bit quantization through learnable offsets and better initialization, CVPRW 2020.
\\n\\n>**Q2: Is the quantization only applied when saving the model?**\\n\\nTo clarify, we employ quantization-aware training (QAT), where quantization is applied after a certain number of training steps, and the model is subsequently trained with quantized weights. This approach results in quantized models that are not only smaller and faster but also exhibit enhanced generalization capabilities.\"}", "{\"comment\": \"Thank you for acknowledging the novelty of using quantization to achieve flatter minima and the robustness of our experimental results. We have addressed your questions and concerns below:\\n\\n>**W1/Q2: Why does each channel in a layer have a different scaling factor? How is the value of $s$ determined?**\\n\\nIt is common practice [1,2,3] to use a learnable, channel-wise scaling factor $s$ because channels within a layer often exhibit varying activation and weight distributions. To account for these differences, channel-wise scaling factors are applied to normalize the perturbation $\\\\Delta$ per channel. However, to explore the impact of this choice, we conducted an ablation study where we set $s$ at the layer level, rather than on a per-channel basis. We see that channel-wise $s$ can lead to a 1.5% accuracy gain compared to layer-wise $s$. The results of this experiment on the PACS dataset with 7-bit quantization are shown below:\\n\\n| scale | OOD Accuracy |\\n|-------------|-------------------|\\n| No quantization | 84.7 \\u00b1 0.5 |\\n| Channelwise | 87.8 \\u00b1 0.3 |\\n| Layerwise | 86.3 \\u00b1 0.5 |\\n\\n[1] Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance, ECCV 2022. \\\\\\n[2] Learnable Companding Quantization for Accurate Low-bit Neural Networks, CVPR 2021. \\\\\\n[3] Learned Step Size Quantization, ICLR 2020. \\\\\\n[4] LSQ+: Improving low-bit quantization through learnable offsets and better initialization, CVPRW 2020. \\n\\n>**W2: Measuring sharpness by equation 5 needs more discussion.
When the model weight under measurement has different scales, perturbing the weight with the same $\\\\gamma$ does not serve as a fair way to get $w'$ around the local area of $w$. Relative flatness (Definition 3 in [1]) could be considered to address the dependency of the sharpness measurement on the scale of the model weights. [1] https://arxiv.org/pdf/2001.00939**\\n\\nThank you for suggesting this. We find this idea compelling and would like to conduct a proof of concept to explore its potential in our analysis. We have contacted the authors of [1] and will try to incorporate this in our framework to further extend our empirical evaluation. \\n\\n[1] Relative Flatness and Generalization, NeurIPS 2021.\\n\\n>**Q1: How much does $w_q$ differ from $w$ in training?**\\n\\nInitially, the difference is relatively large when the weights get clipped to quantization levels, but it decreases as training progresses and then remains almost constant throughout the rest of the training. Overall, the difference is relatively small and stays almost constant during training.\\n \\n>**Q3: Given a fixed quantization bit $b$, how would $s$ influence the performance?**\\n\\nWe are not sure we understand the reviewer's question. To clarify, $b$ is pre-determined by the user and the channel-wise $s$ values are learned during training; as such, they can automatically adapt to the given $b$ value. \\n\\n\\n>**Q4: What is the scale of $w$ (e.g. norm) across different algorithms in section 3.3?**\\n\\nFollowing the reviewer's suggestion, we measured the norm of the different $w$s in the network and observed no significant difference between the norms of quantized and non-quantized parameters, even across different datasets.
We have carefully considered your comments and have provided detailed responses to your queries below.\\n\\n>**W1: How does your approach of replacing standard ERM training with quantization and using an ensemble of quantized models offer a novel contribution to domain generalization, given that quantization and model-soup are already established techniques?**\\n\\n While it is true that quantization has been applied in ERM in prior work, our key contribution lies in repurposing quantization as an implicit regularizer specifically for domain generalization (DG). Unlike traditional uses of quantization for compression, we show that quantization noise helps guide optimization toward flatter minima, which enhances OOD generalization. For more clarity, we reworded our first contribution as ``We are the first to demonstrate that quantization-aware training, traditionally used for model compression, can serve as an implicit regularizer, with quantization noise enhancing domain generalization.\\\"\\n\\nMoreover, our ensemble of quantized models builds on model-soup but introduces a key innovation by combining quantization with ensembling in a way that reduces resource overhead while maintaining high performance. This fusion not only improves DG but also ensures scalability across diverse benchmarks. The combination of these techniques within the context of domain generalization, supported by extensive empirical validation, represents a solid and impactful contribution, which we believe to be of interest to the community.\\n\\n>**W2: How does the manuscript define flat minima, and what theoretical evidence supports the connection between quantization and flatter minima?**\\n\\nWe appreciate the reviewer\\u2019s feedback and acknowledge that the definition of flat minima needs to be more clearly presented. In the revised manuscript, we have clarified the discussion of flat minima. 
Flat minima refer to regions in the loss landscape where the loss remains relatively stable under small perturbations of the model parameters.\\n\\nThe theoretical connection is drawn in Equation 4, which relates the Hessian of the loss to the impact of quantization. Quantization noise acts as a perturbation, and the optimization process biases the model toward flatter regions where the Hessian eigenvalues are smaller, ensuring stability under these perturbations. We have revised the text to clarify this.\\n\\n>**W3: Equation 5 does not provide theoretical evidence to support $\\\\mathcal{F}_\\\\gamma(W^q) < \\\\mathcal{F}_\\\\gamma(W)$. How do you justify its inclusion, given the variety of quantization methods and settings?**\\n\\n\\nEquation 5 is not intended to provide a theoretical proof or establish a formal connection but rather to serve as an empirical demonstration, akin to the approach used in SWAD [1]. It illustrates that the loss landscape becomes flatter with the application of quantization. This observation is supported by the plots in Figure 2, where it is evident that quantized models consistently achieve flatter loss landscapes compared to their non-quantized counterparts. While the diversity of quantization methods and settings makes a universal theoretical framework challenging, our empirical findings consistently validate this behavior across the datasets and quantization methods we evaluated.\\n\\n[1] SWAD: Domain generalization by seeking flat minima. Cha et al., NeurIPS 2021.\\n\\n>**W4: Not all quantization methods lead to flat minima. Why does the adopted method succeed where others fail?**\\n\\n\\nThis observation aligns with our findings. We demonstrate that quantization-aware training (QAT) encourages flatter minima, which is not guaranteed with post-training quantization (PTQ). In QAT, flatter minima are achieved by incorporating quantization noise during training, acting as an implicit regularizer and smoothing the loss landscape (see Equation 4).
In contrast, PTQ primarily involves clipping the network weights without any subsequent retraining, so it does not produce the same effect. This difference is reflected in Table 2, where PTQ performs worse on out-of-domain data compared to QAT. We provide a detailed discussion of this distinction in Section 4.3.4. If further clarification is needed, we are happy to revise this explanation.\"}" ] }
EXaKfdsw04
StepProof: Step-by-step verification of natural language mathematical proofs
[ "Xiaolin Hu", "Qinghua Zhou", "Bogdan Grechuk", "Ivan Y Tyukin", "Oliver Sutton" ]
Interactive theorem provers (ITPs) are powerful tools for the formal verification of mathematical proofs down to the axiom level. However, their lack of a natural language interface remains a significant limitation. Recent advancements in large language models (LLMs) have enhanced the understanding of natural language inputs, paving the way for autoformalization—the process of translating natural language proofs into formal proofs that can be verified. Despite these advancements, existing autoformalization approaches are limited to verifying complete proofs and lack the capability for finer, sentence-level verification. To address this gap, we propose StepProof, a novel autoformalization method designed for granular, step-by-step verification. StepProof breaks down complete proofs into multiple verifiable subproofs, enabling sentence-level verification. Experimental results demonstrate that StepProof significantly improves proof success rates and efficiency compared to traditional methods. Additionally, we found that minor manual adjustments to the natural language proofs, tailoring them for step-level verification, further enhanced StepProof’s performance in autoformalization.
[ "Mathematical NLP", "Autoformalization", "Logic Reasoning" ]
Reject
https://openreview.net/pdf?id=EXaKfdsw04
https://openreview.net/forum?id=EXaKfdsw04
ICLR.cc/2025/Conference
2025
{ "note_id": [ "SuIosF945y", "ShlXeXZwj9", "HYVFuXH3nt", "FnpXqIveE0", "DLRR9JMeQS", "1BVL5hSnz5" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "decision", "meta_review" ], "note_created": [ 1730708116236, 1729685055426, 1730720652588, 1730672988197, 1737523834051, 1734560879639 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7361/Reviewer_EFnZ" ], [ "ICLR.cc/2025/Conference/Submission7361/Reviewer_rYkN" ], [ "ICLR.cc/2025/Conference/Submission7361/Reviewer_nEet" ], [ "ICLR.cc/2025/Conference/Submission7361/Reviewer_pXFu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7361/Area_Chair_7Jdd" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes generating formal proofs from informal ones in a step-by-step manner and shows that it outperforms one-time proof generation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper applies the known idea that, for LLMs, step-by-step logical reasoning works better than one-step reasoning, to autoformalization, especially formal proof generation.\"], \"weaknesses\": \"- It is convincing that the step-by-step reasoning works well, but I feel it is natural and unsurprising because it has been shown that LLMs can carry out step-by-step logical reasoning better than one-step reasoning [1]. Thus, I'm unsure how significant the contribution of the paper is, as it seems only to confirm that step-by-step proof generation works better than one-step generation.\\n- Furthermore, it seems that the paper is not the first to apply the step-by-step reasoning power of LLMs to formal proof generation. For example, LEGO-Prover [2] decomposes informal proofs into step-by-step informal proofs with sub-goals and then proves the generated sub-goals. 
Although the main aim of LEGO-Prover is to address growing libraries, the paper does not theoretically, empirically, qualitatively, nor quantitatively compare the proposed approach with such existing approaches that exploit the step-by-step reasoning ability of LLMs.\\n\\n[1] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022\\n\\n[2] Haiming Wang, Huajian Xin, Chuanyang Zheng, Zhengying Liu, Qingxing Cao, Yinya Huang, Jing Xiong, Han Shi, Enze Xie, Jian Yin, Zhenguo Li, Xiaodan Liang:\", \"lego_prover\": [\"Neural Theorem Proving with Growing Libraries. ICLR 2024\", \"Other comments and concerns\", \"Please insert spacing before citations, like \\\"the large language model(Zhao et al., 2023)\\\" --> \\\"... model (Zhao et al., 2023)\\\" in line 44. I suggest reviewing all the citations to avoid similar issues.\", \"l88 \\\"A lot of work has also shown that ...\\\" Please cite the papers showing it.\", \"l100 \\\"bert\\\" --> BERT?\", \"l106 \\\"DTV\\\" Please explain the abbreviation where it appears first.\", \"l110 \\\"Qinghua et al.\\\" No link to the citation.\", \"l155-160: It would be nice to explain the problem with concrete examples that enable the reader to easily find the issue in FULL-PROOF strategies.\", \"l161 \\\"max_new_tokens\\\" No explanation about this.\", \"l338 \\\"new index\\\" I can't find what this is.\"], \"questions\": [\"Is it possible to explain, prove, and/or demonstrate how the application of step-by-step reasoning in proof generation differs from or advances beyond the previous work like [1,2]?\", \"Table 2: What's \\\"Comments Rate\\\"?\", \"Table 3: $r_s$ is a step pass rate?\", \"l351 \\\"simple fitting\\\" Is the fitting necessary? 
It seems to mean one needs to tune the proofs.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method for step-by-step verification of natural language mathematical proofs. Unlike traditional \\\"full-proof\\\" approaches, which formalize and verify entire proofs at once, the proposed method (\\\"StepProof\\\") breaks down proofs into smaller, sentence-level subproofs to perform more granular verification. This compositional approach also allows for an interactive user experience as well as error handling by allowing verification of individual steps and retrying only failed steps without discarding the entire proof. An evaluation shows that the StepProof strategy performs better than a full-proof strategy and two existing baseline approaches for autoformalization on the GSM8K dataset.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The compositional verification approach is sensible and seems relatively novel in the autoformalization domain (though similar approaches have been explored in many other domains). Its benefits are intuitive and clear as compared to generating entire proofs at once (detecting errors, not redoing entire proofs, increased robustness).\\n2. The experimental results do show core utility of the ideas with improvements over the full-proof strategy and baselines in terms of proof success/number of attempts, though there is much room for improvement of the evaluation in various aspects. Bringing the study to open-source models like Llama and implementing baseline systems here is also admirable and can help progress the research area with broader accessibility.\\n3. 
The compositional approach not only has improved results for full automation, but also creates a foundation for a more interactive user experience (more fine grained feedback from steps verification, allowing user to change or improve individual steps or skip steps in the proof, etc). Though this aspect has not been directly evaluated with real users in this work, it is a good direction to take in the autoformalization domain to provide more control and assistance for users.\", \"weaknesses\": [\"1. Decomposition approach limitations. I am not sure about your approach of decomposing the informal proof into independent subpropositions. \\\"STEP-PROOF assumes each sentence in the proof is a verifiable sub-proposition\\\" - are you really just breaking by syntactic checks for sentences? What if you have a subproposition that is expressed in multiple sentences with dependencies or contextual information between them? Perhaps a better approach would be to try to use the LLM to explicitly and more intelligently decompose the informal proof into independent sub-propositions (or lemmas) as is commonly done in compositional approaches with LLMs. See e.g. (Tushar et al ICLR'23, Pourreza et al NeurIPS'23). This is particularly concerning: \\\"and made simple manual modifications to make the proof step more consistent with the proof requirement of StepProof\\\". Firstly, this shows that StepProof is not directly capable of handling arbitrary informal NL proofs. Secondly, you can provide much more details here in terms of what manual modifications were required (you have plenty space left in the page limit and unlimited space in the appendix). You can explain general classes of modifications that needed to be made, and also provide many samples of the modifications you made to help the reader judge how \\\"simple\\\" the modifications are.\", \"2. Evaluation limitations. 
Though showing core value to some degree, the main evaluation results do not show a very strong improvement (6.1% vs 5.3% on compositional vs direct strategy and 27.9% vs 25.3% in comparison with the best DTV baseline). These seem pretty marginal and may be within margin of random variations in experiments and LLM performance. Also, only one dataset (GSM8K) is used - not sure if this shows generality of the approach, especially given its assumptions of decomposition at the sentence level which would be good to test on more datasets. The number of attempts comparison between StepProof and baselines is interesting - 10 vs 64 attempts is a significant improvement for step proof. But can you clarify: are these the settings of the attempts parameter that you have chosen? Did the baselines actually require this many attempts or was their performance similar with fewer attempts? Perhaps a more explicit investigation of this would be helpful - e.g. a graph showing how the performance (accuracy) of both your system and baselines (y axis) increases or changes with the number of attempts (x axis).\", \"3. Presentation problems. Many errors, inconsistencies and presentation/organization issues make the paper difficult to read and follow. Please improve upon these. Some examples:\", \"\\\"The workflow of STEP-PROOF is illustrated in the left of Figure 1.\\\" - should be in the right\", \"E.g., \\\"As shown in Table 4.2\\\" should be \\\"Table 1\\\" and then again for the baselines is states \\\"In the baseline test as shown in Table 4.2\\\"! which should be \\\"Table 2\\\". 
Please check for such mistakes and organize the paper better.\", \"many references do not state the conferences where the papers have already been accepted - please improve the quality of references\", \"organization of the paper in terms of sectioning needs much improvement - especially in the evaluation section, you seem to switch focus to different aspects of evaluation abruptly in the next paragraph and it is pretty confusing - can you please organize different aspects into appropriate subsections (you seem to have plenty of space left in the page limit anyway).\", \"In the first paragraph of evaluation you state \\\"we conducted both strategy performance tests and baseline tests\\\" - please clarify that what you mean by strategy performance tests and what you mean by baseline tests - it took a while to understand what these meant after back and forth reading and it helps to clearly introduce the goals of the evaluation to the user in normal language without paper-specific terminology.\", \"please clarify in Table 1 that the proof pass rate is for ONE ATTEMPT and that in Table 2 it is for multiple attempts (it took a while to understand why the proof pass rate was low here as compared to TABLE 2)\"], \"referenced_related_works\": [\"Decomposed Prompting: A MODULAR APPROACH FOR SOLVING COMPLEX TASKS\", \"Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu,Kyle Richardson, Peter Clark, Ashish Sabharwal. In ICLR 2023\", \"Mohammadreza Pourreza and Davood Rafiei. Din-sql: decomposed in-context learning of text-to sql with self-correction. In NeurIPS 2023\"], \"questions\": \"1. Are all evaluations you have done with StepProof in fully automated manner? So there are no user interactions in these evaluations? Please clarify that if its the case.\\n2. Also, some evaluation of interactive features would be good, e.g. with user studies. \\n3. What is \\\"comments rate\\\" in table 2? Is it the amount of feedback from the verification system? 
What does the 100% for StepProof and 31.3% for DTV mean? Please include some discussion of this and how exactly it may be relevant.\\n4. Why did you limit the number of attempts in StepProof to 10 (while for other methods like DTV you allow up to 64 attempts)? What happens after 10 attempts? Does the system improve further, do gains diminish, or could there even be degradation? \\n5. Can StepProof cause worse performance if the proof fails due to sentence-level decomposition problems, while the full-proof may still work as it has the whole context?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces StepProof, a method designed to improve the verification of natural language mathematical proofs by breaking them down into smaller, verifiable subproofs. Unlike traditional autoformalization methods that only verify complete proofs, StepProof operates on a sentence level, enabling granular verification of each step. This approach aims to enhance the efficiency and accuracy of proof verification by targeting and resolving errors within individual steps, rather than regenerating entire proofs. Experimental results indicate that StepProof outperforms other approaches in proof success rates and efficiency. Minor manual modifications to the proofs, aligning them with step-level requirements, were also found to enhance the performance of StepProof.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Granular Verification Approach: StepProof\\u2019s method of breaking down proofs into verifiable sub-propositions is a notable advancement compared to traditional methods that require verifying complete proofs. 
This could allow for a more targeted approach to error correction by addressing specific subproofs.\", \"Improved Success Rates and Efficiency: The experimental findings indicate that StepProof improves both success rates and efficiency for proof verification over baseline methods. By enabling sentence-level verification, StepProof appears to make the autoformalization process more manageable and less resource-intensive, which could be beneficial for large-scale applications.\", \"Enhanced Usability: The authors found that the minor adjustments to natural language proofs tailored for step-level verification, which was made easier by the GUI interfaces, further enhance StepProof\\u2019s performance, suggesting practical guidance for users to maximize the method's effectiveness.\"], \"weaknesses\": [\"Unsubstantiated Claim of Advantage: The claimed advantage of StepProof\\u2019s selective error correction (where only erroneous steps are retracted rather than the entire proof) is not unique to StepProof. Interactive theorem provers (ITPs) inherently support stepwise correction, enabling users to fix specific errors without requiring a full retraction. Thus, StepProof\\u2019s advantage in this aspect appears overstated.\", \"Lack of Novelty in Stepwise Translation: Stepwise translation in autoformalization is not a new concept. Previous methods, like the DSP approach, have already implemented similar methodologies. These methods translate decomposed proof steps, whether generated by an LLM or provided by a human, indicating that StepProof may not be as innovative as claimed in this area.\", \"Overly Restrictive Assumptions: StepProof\\u2019s framework assumes that each sentence in a proof can be treated as an independent, verifiable sub-proposition, which limits its applicability. This subgoal-based approach does not align well with many natural logical structures in proofs, especially those involving complex logical dependencies or sequence reordering. 
As a result, StepProof might require significant manual adjustments for compatibility with common proof structures.\", \"Ambiguous Evaluation Methodology: The incremental verification of sub-propositions might count intermediary, potentially incorrect and mathematically misleading results as \\u201ccorrect\\u201d sub-proofs, leading to an inaccurate reflection of the method's overall success. This evaluation ambiguity calls into question the validity of the reported improvements in success rates.\", \"Inappropriate Benchmark Dataset: The authors\\u2019 choice of GSM8K as a benchmark dataset is unsuitable for evaluating proof autoformalization due to its relative simplicity and lack of complex logical structures. Datasets like ProofNet or MiniF2F would provide a more accurate measure of StepProof\\u2019s performance on challenging, real-world mathematical proofs, better reflecting its practical value in formalization tasks.\", \"Insufficient Detail on Methodology: It is also worth noting that further clarification on the prompting and output syntax for both StepProof and Full-Proof in the comparison experiments is needed. More specific details on the handling of the LLM\\u2019s guessed proof states could be valuable for assessing the validity and replicability of the reported performance differences between the two approaches.\"], \"questions\": [\"Consider Additional Benchmark Datasets: Suggest that the authors include additional, more complex datasets like ProofNet or MiniF2F in future evaluations.\", \"Highlight Distinctions from Prior Work on Stepwise Translation: Advise the authors to address the similarities between StepProof and prior stepwise translation methods.\", \"How does StepProof handle complex proof structures outside the subgoal-based framework?\", \"Could the authors elaborate on StepProof\\u2019s limitations in handling complex, non-linear proof structures? 
For example, how does it manage proofs that involve nested assumptions, indirect reasoning, or statements that need reordering for coherence?\", \"How does StepProof compare to Whole-Proof when using more informative LLM feedback?\", \"In Whole-Proof, does the LLM output proof states after each line, or is this feature limited to StepProof? Allowing Whole-Proof access to these proof states could potentially improve its success rate. Could the authors provide more details on the prompting strategies for both methods?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper describes a framework for granular, step-by-step verification of natural language reasoning.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"N.A.\", \"weaknesses\": [\"The paper is almost impossible to comprehend. I highly suspect a big portion of it is generated by LLMs.\", \"The full-proof strategy so baffling: I am not even sure what is formal backend and what is role of users here.\"], \"questions\": \"N.A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The reviewers stated that defining verifiable sub-propositions of proofs may pose an advantage compared to traditional proof verification methods. The experimental results show potential superiority over baseline methods.\\n\\nHowever, the advantages over other methods are not clear, and the lack of novelty could not be disputed by the authors. Among other points, the reviewers state very restrictive assumptions and that more details on the evaluation are needed.\", \"additional_comments_on_reviewer_discussion\": \"Since the authors did not reply to the reviews, the discussion led to a clear rejection of this paper.\"}" ] }
EXXvBdFJ6I
On the Inflation of KNN-Shapley Value
[ "Ziao Yang", "Han Yue", "Jian Chen", "Hongfu Liu" ]
Shapley value-based data valuation methods, originating from cooperative game theory, quantify the usefulness of each individual sample by considering its contribution to all possible training subsets. Despite their extensive applications, we observe these methods encounter value inflation—while samples with negative Shapley values are detrimental, some with positive values can also be harmful. This challenge prompts two fundamental questions: the suitability of zero as a threshold for distinguishing detrimental from beneficial samples and the determination of an appropriate threshold. To address these questions, we focus on KNN-Shapley and propose Calibrated KNN-Shapley (CKNN-Shapley), a semi-value method that calibrates zero as the threshold to distinguish detrimental samples from beneficial ones by mitigating the negative effects of small-sized training subsets. Through extensive experiments, we demonstrate the effectiveness of CKNN-Shapley in alleviating data valuation inflation, detecting detrimental samples, and assessing data quality. We also extend our approach beyond conventional classification settings, applying it to diverse and practical scenarios such as learning with mislabeled data, online learning with stream data, and active learning for label annotation.
[ "Shapley Value", "Data Valuation", "KNN" ]
Reject
https://openreview.net/pdf?id=EXXvBdFJ6I
https://openreview.net/forum?id=EXXvBdFJ6I
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zc553Rg3Uy", "yyXWGWGFjE", "pNlOXRsNyP", "n9fPsVEGmz", "mSIuaLmXVR", "kYyg2YMZ9w", "hgLYo1OhBs", "gsNox74O7Y", "eB2L1lYNsy", "cWim83kpIV", "ayRvoe2F4R", "W2n1rQalvb", "UQ6isOqcxr", "TXYzjoymG0", "NnPJgVAkv5", "KTX2CSRuyQ", "F5ANBw96bC", "EovX7aFxmd", "BO0HKI5wHV", "7aJakV0Rew", "5Nv66SRNdw", "2RkbPgKS7S", "1VPqRnEZgN" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment" ], "note_created": [ 1732041096041, 1732040333581, 1732040375388, 1732038418576, 1732415158353, 1732423278096, 1730479656650, 1732041408680, 1732573173759, 1733134232645, 1732039414562, 1732315996937, 1730261800183, 1732573198327, 1737523718032, 1732325468768, 1733157824493, 1732375947671, 1732415123935, 1730635643621, 1734744726760, 1730837483040, 1732040773792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_yTqt" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_anAQ" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_oxte" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_anAQ" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_yTqt" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_yTqt" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_oxte" ], [ "ICLR.cc/2025/Conference/Submission5654/Area_Chair_Koob" ], [ "ICLR.cc/2025/Conference/Submission5654/Reviewer_M8bW" ], [ "ICLR.cc/2025/Conference/Submission5654/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (1/2) to Reviewer yTqt\", \"comment\": \"We would like to thank the reviewer for the deep analysis of our work, and the time, effort, and consideration. Below, we answer the questions raised by the reviewer:\\n\\n**Hyperparameter T** \\nThe CKNN variation indeed involves heuristic adjustments, with the primary motivation being to mitigate the inflation issue observed in KNN-based Shapley values. The hyperparameter $T$ is introduced as a means to exclude small, less reliable subsets from the evaluation process, as these subsets disproportionately contribute to value inflation. This heuristic is grounded in the intuition that small subsets often fail to represent meaningful contributions to the overall model performance. \\nThe $T$ value is a hyperparameter in CKNN-Shapley. Similar to selecting different values of $K$ which impacts accuracy, the choice of $T$ in CKNN-Shapley also varies depending on the dataset. In Figure 3, we explore various $T$ values, demonstrating that an appropriate $T$ setting can effectively improve the accuracy of the KNN classifier. Table 8 provides further details, where we show that setting \\\\( T = N - 2K \\\\) yields optimal results.\\n\\n**Ramifications** \\nTo clarify, in CKNN-Shapley, each subset effectively selects at least \\\\( T - 2K \\\\) points due to the design of \\\\( T = N - 2K \\\\). 
\\nThis choice relaxes the efficiency axiom of Shapley value, meaning that the sum of CKNN-Shapley values no longer equals the total accuracy, as it would in a standard Shapley value setup. We will further elaborate on these implications in future versions to enhance the clarity and motivation behind this heuristic.\\n\\n**Statistical tests** \\nStatistical tests demonstrate that CKNN significantly outperforms KNN in classification accuracy. Specifically, we performed paired t-tests (t = 2.909, p = 0.0196) and Wilcoxon signed-rank tests (statistic = 1.5, p = 0.0117) to compare the performance of our method with the second-best method on each dataset as presented in Table 2. The results indicate that the differences are statistically significant (p < 0.05), demonstrating the superiority of our method. This analysis highlights the consistent and significant improvements achieved by CKNN across multiple datasets.\\n\\n**Sensitive to $K$ and $T$** \\nThe role of hyperparameters is to control and adapt the method to specific datasets and tasks, and it is not inherently better for the results to be insensitive to them. If a hyperparameter had no impact, it would lack significance and fail to serve its intended purpose. Additionally, we ensured consistent and fair default settings for all experiments, and for hyperparameters deemed sensitive, we conducted experiments to analyze their effects. In our experiments, we provided the parameter analysis experiments and a recommended setting of $K$ and $T$. With the fixed setting, there is no sensitivity issue. Indeed, we acknowledge that in practical applications, these hyperparameters need to be tuned using the validation set to achieve optimal performance.\\n\\n**012** \\nThank you for highlighting this important point. 
We will revise the abstract and introduction in future versions to more clearly specify that our motivation is centered on the inflation issue in KNN-based Shapley value methods.\\n\\n**019** \\nSame with **Ramifications**.\\n\\n**046** \\nThank you for pointing out these issues. We acknowledge the errors in the references and appreciate your detailed feedback. We will revise the citations to correctly attribute the methods to their original sources.\\n\\n**134** \\nThank you for the suggestion. We put significant effort into designing Figure 1, aiming to present as much information as possible. This is why the figure includes a detailed caption to explain its elements. The reason we do not remove all bins to the left of the current bin is that our objective is to determine whether removing the current bin alone increases or decreases accuracy. This allows us to identify whether the samples in the bin are beneficial or detrimental to the model\\u2019s performance. This approach directly aligns with our research question, which is focused on understanding the impact of individual samples (bins) on accuracy and demonstrating the value inflation issue in KNN-based Shapley methods.\\n\\nHowever, we agree with your point that using consistent binning for the red and blue plots could improve clarity. In the next version of the paper, we will simplify the figure by removing the blue binning to enhance readability.\\n\\n\\n**153** \\nThe detrimental samples refer to those that, when removed, lead to an increase in KNN accuracy. Importantly, the evaluation of accuracy is conducted on the test set. Therefore, whether a sample is detrimental depends on its relationship with the training set, the test set, and the KNN classifier itself. 
Additionally, many inflation samples in KNN-Shapley are calibrated by CKNN-Shapley to have values closer to 0, which redistributes their contributions more realistically.\"}", "{\"title\": \"Response (2/3) to Reviewer oxte\", \"comment\": \"**Generalize beyond KNN**\\nWe invite Reviewer oxte to check Table 7 in the appendix, where we showed the results of applying CKNN-Shapley to clean the training set and applying other classifiers (MLP, LR, and SVM) on the clean training set for the learning task. These experiments highlight CKNN-Shapley\\u2019s general applicability beyond KNN models, demonstrating that it can effectively improve learning performance by removing detrimental samples even for non-KNN-based classifiers.\\n\\n\\n**Could the calibrated approach be adapted for other surrogate models in Shapley-based data valuation?** \\nWe believe the calibrated approach could indeed be adapted for other surrogate models in Shapley-based data valuation. In machine learning tasks, avoiding the use of small subsets is generally a good practice. However, we were unable to experimentally verify this hypothesis because existing Shapley value calculations in machine learning are computationally feasible only when KNN is used as the surrogate model. For other models, Shapley value calculations would require retraining \\\\( 2^n \\\\) times, making it computationally intractable. This limitation arises because KNN is a lazy learner and does not involve an explicit training process, allowing for more efficient evaluation.\\n\\n**Other semi-value** \\nWe have already compared CKNN-Shapley in previous answers, as shown in the Table 1. In this paper, we focus on KNN-Shapley based data valuation. If you believe additional methods should be compared, we kindly request that you provide relevant references or code for those approaches. 
This would help us ensure a fair and comprehensive comparison in future work.\\n\\n**Real-world Use Cases & Deployment in Production Settings** \\nIn this paper, we use widely used benchmark datasets, which are all real-world datasets. If Reviewer oxte knows other public benchmark datasets, we would like to include them in our paper. Deployment in production is beyond the scope of our paper.\\n\\n**Setting of Online Learning and Active Learning** \\nIn our online learning and active learning experiments, we use a KNN classifier for classification combined with data selection via CKNN-Shapley. In this scenario, there is no batch size parameter. Additionally, the sample removal threshold is set to 0. For dynamic environments, we recommend keeping the sample removal threshold at 0, consistent with the focus of our research. For other parameters, such as batch size, we suggest using the default settings.\\n\\n**Inflation Issues** \\nWe acknowledge that CKNN-Shapley effectively mitigates value inflation but does not completely eliminate it. For further improvement, we notice that the baseline method KNN-Shapley-JW performs better than KNN-Shapley in terms of value inflation, where KNN-Shapley-JW tackles the utility of the empty set over KNN-Shapley. Therefore, one potential direction to further mitigate the inflation is to modify the utility function.\\n\\n**Common Patterns or Features of Misidentified Samples** \\nWe provided an analysis of misidentified samples in Table 6 in Appendix C.2. Our CKNN-Shapley method exhibits significantly lower false positives compared to other prevalent methods. Since the over-identification of samples as detrimental (FP) is a primary source of value inflation in data valuation, the number of FP samples is much larger than the number of FN samples in all three methods. 
\\n\\n**Significant contributions** \\nOur research focuses on addressing the value inflation problem in KNN-Shapley while maintaining simplicity and computational efficiency, where KNN-Shapley addresses the data valuation and TKNN-Shapley considers the membership leakage issue of data valuation. Therefore, we have a different research question in our paper. Moreover, from the technical perspective, compared to KNN-Shapley and TKNN-Shapley, CKNN-Shapley introduces a calibrated mechanism to systematically reduce value inflation by mitigating the negative effects of small-sized training subsets. We believe that simplicity and effectiveness are core qualities of good research. CKNN-Shapley strikes a balance between addressing a key issue (value inflation) and lowering computational overhead, making it practical and accessible for broader applications.\\n\\n**More theoretical benefits and pitfalls need to be discussed in Section 3** \\nCould you please clarify which specific theoretical aspects or potential pitfalls you would like to see discussed in Section 3? Explicit suggestions would help us refine this section to better address your concerns.\"}", "{\"title\": \"Response (3/3) to Reviewer oxte\", \"comment\": \"**References**\\n[1] Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li, Ce Zhang, Costas Spanos, and Dawn Song. Efficient task-specific data valuation for nearest neighbor algorithms. *International Conference on Very Large Data Bases Endowment*, 2019. \\n[2] Jiachen T. Wang and Ruoxi Jia. \\\"Data Banzhaf: A Robust Data Valuation Framework for Machine Learning.\\\" Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.\"}", "{\"title\": \"Response to Reviewer M8bW\", \"comment\": \"We would like to thank the reviewer for the deep analysis of our work, and the time, effort, and consideration. 
Below, we answer the questions raised by the reviewer:\\n\\n**Two-Step Procedures for Other Classifiers** \\nWe invite Reviewer M8bW to check Table 7 in the appendix, where we showed the results of applying CKNN-Shapley to clean the training set and applying other classifiers (MLP, LR, and SVM) on the clean training set for the learning task. These experiments highlight CKNN-Shapley\\u2019s general applicability beyond KNN models, demonstrating that it can effectively improve learning performance by removing detrimental samples even for non-KNN-based classifiers.\\n\\n**P values** \\nSince all methods compared in our study are based on the deterministic accuracy of KNN, there is no inherent randomness in the results. To show the significance, we performed paired t-tests (t = 2.909, p = 0.0196) and Wilcoxon signed-rank tests (statistic = 1.5, p = 0.0117) to compare the performance of our method with the second-best method on each dataset as presented in Table 2. The results indicate that the differences are statistically significant (p < 0.05), demonstrating the superiority of our method. This analysis highlights the consistent and significant improvements achieved by CKNN across multiple datasets.\\n\\n**Line 154** \\nThank you for noting this. We apologize for the formatting inconsistencies. The issues of line 154 will be addressed in the next revision.\"}", "{\"title\": \"Response to Reviewer yTqt's First Feedback (2/2)\", \"comment\": \"**Overfitting in Table 7**\\nKNN-Shapley is fundamentally a tool for data valuation, and its computation inherently requires a validation/test set to guide the process. However, overfitting to the validation/test set is not the focus of this paper. 
Instead, our work addresses the inflation phenomenon in KNN-Shapley and proposes CKNN-Shapley as a solution.\\n\\nTable 7 demonstrates CKNN-Shapley's improvement over other KNN-Shapley-based methods in the context of data valuation, rather than addressing potential overfitting issues on the validation set. The question you raise about separate training, validation, and larger test subsets is indeed an important one and represents an open challenge in the broader context of data valuation. If you have further thoughts or suggestions, we would be delighted to discuss this open question with you.\\n\\n---\\n\\n**Figure 1** \\nAgree!\\n\\n---\\n\\nWe appreciate Reviewer yTqt's effort in reviewing our paper and raising numerous questions. All questions are relevant and constructive, and some of them concern the nature of KNN-Shapley and the routine experimental setting rather than the core of CKNN-Shapley. Here, we would like to point out that every method has its limitations and no one can address all limitations in one paper. We sincerely invite Reviewer yTqt to evaluate our paper by judging whether our **targeted** research question on inflation is meaningful and whether our proposed method can tackle our **targeted** inflation challenge. Thank you for your valuable time.\"}", "{\"title\": \"additional comments\", \"comment\": \"I have increased my score to 6 given the additional experimental results. For every seed and each of the three datasets, the new method yields better accuracy than the second-best method, so the improvement is clearly statistically significant.\", \"minor_point\": \"p = 0.020 for POL is surprising given that 0.963 is less than one standard deviation away from 0.949 \\u00b1 0.018, i.e., a naive z-score is less than one. In the final version, please describe exactly how the T-test was applied.\\n\\nThe central contribution is to use only the 2K nearest neighbors.
No theory is needed, just a clear statement that this is the contribution, and a direct informal argument why it is better to use only a small number of nearest neighbors. \\n\\nI do not find the \\\"Root of inflation\\\" argument persuasive. Yes, \\\"none of the samples in the sub training set plays a decisive role in predicting the validation sample.\\\" But you haven't explained why that causes Shapley values to be too high, as opposed to merely noisy.\\n\\nI agree that \\\"overfitting to the validation/test set is not the focus of this paper.\\\" But Table 7 is the highlight for practitioners who may use the new method (it should be moved to the final version from the appendix), so the accuracy results in it must not be misleading. My request is to use an appropriate (routine and standard) training/validation/test methodology and to state this methodology clearly. This request is not a request for additional research.\"}", "{\"summary\": \"This paper considers the problem of data valuation methods. Specifically, for a Shapley-value-based method (kNN Shapley value), the paper finds that some harmful data points receive miscalculated positive values, resulting in misunderstanding of those data points. The paper proposes to limit the calculation of kNN Shapley values to subsets whose size is above a specific value.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper considers a problem setting (data valuation) and a representative method (kNN Shapley value) of high practical importance.\", \"The paper is well motivated and the proposed approach is simple and effective.\", \"The paper also discusses the application of the proposed method in various applications.\"], \"weaknesses\": [\"The motivation seems to exist only for the kNN-based Shapley value, not other approaches to the Shapley value.
The claim in the abstract and introduction that the problem exists for all Shapley value methods seems exaggerated.\", \"The link between the motivation and the proposed method is vague. Specifically, is the proposed $T$ value designed only to fix the misclassified values? Is it possible that misclassified values still exist even when using the proposed $T$? This also relates to the important problem of how to systematically assign the $T$ value, which is not theoretically explored in the paper but only noted as \\\"the choice of T is contingent upon the dataset characteristics\\\".\"], \"questions\": [\"What exactly does \\\"semi-value\\\" mean in the paper? It appears three times but is never given a clear definition.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (2/2) to Reviewer yTqt\", \"comment\": \"**174**\\nThank you for this insightful observation. We will clarify this intuition in future versions to improve the understanding of why CKNN-Shapley emphasizes the closest neighbors and downweights the distant points.\\n\\n**197** \\nWe will make sure to include a citation to [1] in future versions to properly reference this theoretical foundation.\\n\\n**231** \\nSee Appendix B.1. Following Wang et al. (2023), we also set the test data size to 10% of the training data size. As a result, the test data size for the *Pol*, *Wind*, and *CPU* datasets is 200, while the test sets for other datasets contain 1000 or 5000 samples.\\n\\n**273** \\nThe method of removing a fixed percentage of training points was suggested by reviewers in our previous submission. This approach originates from [2], and was included to enable more comprehensive and fair comparisons across multiple dimensions.
\\n\\nRegarding the use of the same test set for both Shapley value calculation and accuracy evaluation, we argue that this is appropriate given the research topic of our paper. Our primary focus is on data valuation and addressing the issue of value inflation. Using the same test set ensures consistency and aligns with our research question by providing a direct connection between Shapley value calculations and their impact on classification accuracy. \\n\\n**298** \\nResults for $K = 3$ and $K = 1$ are in the table below:\\n\\n| $K \\\\backslash \\\\text{Datasets}$ | pol | wind | cpu |\\n|--------------------------------|-------|-------|-------|\\n| 3 | 0.970 | 0.940 | 0.950 |\\n| 1 | 0.965 | 0.910 | 0.965 |\\n\\nIn Figure 3B, the result for $K = 10$ is 0.9000, which is also reported in line 239. \\nIn the Pol, Wind, and CPU datasets, \\\\( T = N - 2K \\\\) is calculated as \\\\( 2000 - 2 \\\\times 10 = 1980 \\\\), which aligns with the narrow range of T values shown in Figure 3C (1950 to 1990). We chose this range to closely examine the impact of T around the recommended setting. A broader range of T values is discussed in Appendix C.4, Table 8, where we explore how different T values affect performance across these datasets. \\n\\nRegarding the sensitivity to hyperparameters and the small test set, please see our responses to **Sensitive to K and T** and **231** above.\\n\\n**317** \\nThis point is discussed in section 3 from lines 244 to 255, where we explain how having the empirically best threshold closer to zero ensures consistency with Shapley value axioms by preserving efficiency, treating identical samples equitably (symmetry), and assigning zero to non-contributory samples (zero elements), thereby avoiding arbitrary truncations. \\n\\n**651 and 728** \\nThank you for pointing out the typo and error. We will correct them in the next revision.\\n\\n**757** \\nThank you for the suggestion.
We agree that Table 7 is important as it demonstrates how CKNN-Shapley can be applied in practice, and we will consider moving it to the main paper in the next revision. Regarding statistical significance, it is important to note that KNN-based methods, including CKNN-Shapley, do not inherently involve statistical significance testing, as they rely on deterministic accuracy values. We will clarify this in the paper to address any potential confusion.\\n\\n\\n\\n**References** \\n[1] Dubey, Pradeep, Abraham Neyman, and Robert James Weber. Value theory without efficiency. *Mathematics of Operations Research*, 6(1), 122-128, 1981. \\n\\n[2] Kevin Jiang, Weixin Liang, James Y Zou, and Yongchan Kwon. *Opendataval: a unified benchmark for data valuation*. Advances in Neural Information Processing Systems, 2023.\"}", "{\"title\": \"Response to Reviewer yTqt's Additional Comments (1/2)\", \"comment\": \"We are very delighted to see the increased score. Appreciated! Below is our response for the follow up questions.\\n\\n\\n**p = 0.020 for Pol** \\nThank you for the comment. The p-value is derived from a paired t-test, which compares CKNN-Shapley against the second-best method for each seed. Unlike a z-score comparison based on means and standard deviations, the paired t-test focuses on the differences between paired observations across seeds, accounting for both the magnitude and consistency of the improvements. Notably, CKNN-Shapley outperforms the second-best method by a margin of at least 0.005 across all seeds, with differences up to 0.020 in some cases. This leads to a statistically significant p-value of 0.020, reflecting CKNN-Shapley's consistent advantage across seeds. We will describe the t-test more clearly in the final version.\\n\\n**Why that causes Shapley values to be too high** \\nWe focus on Eq. (2) in this paper, the inflation arises due to the following reasons:\\n\\n1. 
**The initialization of $\\\\( \\\\nu_k(z_{\\\\alpha_{i+1}}) \\\\)$ is inherently biased towards positive values.** \\n The Shapley value computation starts from \\n $ \\\\( \\\\nu_k(z_{\\\\alpha_N}) = \\\\frac{1[y_{\\\\alpha_N} = y_v]}{N} \\\\)$, \\n which is always non-negative. This positive bias propagates through the recursive computation and influences all subsequent values. The reason this initialization value is positive lies in the definition of Shapley values, which evaluate the marginal contribution of adding a single sample to a subset. Even when the subset is empty, any match in labels contributes positively to the value.\\n\\n2. **The cumulative effect of the adjustment term is biased towards retaining positive contributions.** \\n In most cases, the adjustment term \\n $\\\\( 1[y_{\\\\alpha_i} = y_v] - 1[y_{\\\\alpha_{i+1}} = y_v] \\\\)$ \\n equals 0 because both $\\\\( y_{\\\\alpha_i} \\\\neq y_v \\\\) and \\\\( y_{\\\\alpha_{i+1}} \\\\neq y_v \\\\)$ occur frequently in multi-class classification. This zero adjustment does not neutralize the initial positive bias, and any non-zero adjustment (though rare) is more likely to accumulate positive contributions than negative ones.\\n\\nTogether, these factors explain why the Shapley values are inflated rather than merely noisy, as the structure of the recursive computation inherently leans towards positive accumulation. We hope this addresses your concern, and we are happy to provide additional clarifications if needed.\"}", "{\"comment\": \"Thank the authors for the response. Some of my concerns still have not been addressed.\\nI give 5 as this paper is not good enough for ICLR.\"}", "{\"title\": \"Response (1/3) to Reviewer oxte\", \"comment\": \"We would like to thank the reviewer for the deep analysis of our work, and the time, effort, and consideration. 
Below, we answer the questions raised by the reviewer:\\n\\n**Scalability for very large or high-dimensional datasets** \\nWe would like to emphasize that the primary focus of our research is to **address the inflation issue in KNN-Shapley** (a well-known method in the field of data valuation), **rather than optimizing computational costs or specifically targeting high-dimensional datasets**. Our goal was to improve data valuation by mitigating the inflation observed in KNN-Shapley; the acceleration of our proposed CKNN-Shapley over KNN-Shapley is a byproduct. Additionally, we have demonstrated CKNN-Shapley\\u2019s effectiveness on real-world image and text datasets, including *CIFAR-10*, *AGnews*, *SST-2*, and *News20*, which include both high-dimensional data and large sample sizes. It is worth noting that we follow the literature [1] to analyze data valuation for deep models, where deep embeddings are taken as inputs for KNN-Shapley-based methods. For very large and high-dimensional datasets, techniques such as Locality Sensitive Hashing (LSH) can be utilized for faster KNN computation [1]. \\n\\nEvery method has its limitations, and no one can address all limitations in one paper. Here we sincerely invite Reviewer oxte to evaluate our paper by judging whether our **targeted** research question is meaningful and whether our proposed method can tackle our **targeted** challenge. Thank you! \\n\\n**Limitations in contexts where KNN is less effective** \\nWe acknowledge that, inheriting from KNN-Shapley, CKNN-Shapley relies on KNN as a surrogate model, which may limit its applicability in some contexts. As with our response to the question above, this potential limitation is not our primary research question.
Our main focus is on mitigating the value inflation observed in KNN-Shapley.\\n\\n**Lack of compare with other semi-value methods or recent alternatives in data valuation** \\nSince retraining costs are prohibitive for various Shapley value-based semi-value methods, KNN-based approaches are indeed more suitable for practical use, where we also focus on this category. We have made every effort to include relevant KNN-based methods for comparison and even implemented KNN Beta Shapley ourselves. In response to your suggestion, we also implemented Banzhaf following [2], using KNN as the utility function to provide a more comprehensive analysis. The results are shown in the table below, indicating that our CKNN-Shapley achieves the best performance in terms of average accuracy by removing negative Shapley values:\\n\\n| Method\\\\Datasets | *MNIST* | *FMNIST* | *CIFAR10* | *Pol* | *Wind* | *CPU* | *AGnews* | *SST-2* | *News20* | **Avg.** |\\n|--------------------------|---------|----------|-----------|---------|---------|---------|----------|----------|----------|----------|\\n| Vanilla KNN | 0.9630 | 0.8444 | 0.5956 | 0.9400 | 0.8700 | 0.9200 | 0.9060 | 0.7270 | 0.6920 | 0.8287 |\\n| KNN-Shap# | 0.9682 | 0.8586 | 0.6456 | 0.9700 | 0.8750 | 0.9650 | 0.9250 | 0.8160 | 0.7580 | 0.8646 |\\n| KNN-Shap-JW# | 0.9698 | 0.8574 | 0.6514 | 0.9700 | 0.8650 | 0.9600 | 0.9250 | 0.8498 | 0.7610 | 0.8677 |\\n| TKNN-Shap# | 0.6644 | 0.8356 | N/A | 0.8150 | 0.8300 | 0.9100 | 0.8990 | 0.7982 | 0.6730 | 0.8032 |\\n| KNN-Beta Shap# | 0.9630 | 0.8444 | 0.5956 | 0.9400 | 0.8700 | 0.9200 | 0.9060 | 0.7270 | 0.6920 | 0.8287 |\\n| CKNN-Shap (Ours)# | **0.9828** | **0.8884** | **0.7164** | **0.9700** | 0.9000 | 0.9700 | **0.9420** | **0.8980** | **0.7960** | **0.8960** |\\n| KNN-Banzhaf | 0.9620 | 0.8440 | 0.5970 | 0.9650 | **0.9100** | **0.9750** | 0.9230 | 0.7718 | 0.7050 | 0.8503 |\\n\\nIf you believe additional methods would be beneficial for comparison, please feel free to recommend 
relevant literature, and we would like to add more comparisons.\\n\\n**Parameter T** \\nPractitioners should aim to select a relatively large *T*, and we recommend *T=N-2K*. We invite Reviewer oxte to check our detailed analysis on selecting the subset size parameter *T* in Figure 3 and Table 8 in Appendix C.4, where a large range of *T* is tested on all datasets.\\n\\n**Large, high-dimensional, and real-time data valuation** \\nWe believe that fast neighbor search based on cluster analysis and dimensionality reduction techniques, including deep embeddings, can be used to tackle very large and high-dimensional datasets. For real-time data valuation in dynamic environments, we are not aware of a realistic scenario in practice: since data valuation aims to analyze the importance of training samples, retraining with cleaned or reweighted samples is required, and such retraining usually takes a long time and is difficult to perform in real time.\"}", "{\"title\": \"Response to Reviewer M8bW\", \"comment\": \"We would like to thank the reviewer for the deep analysis of our work, and the time, effort, and consideration.
The test sets used are just too small, as is the number and variety of datasets.\\n\\n============\\n\\nShapley values can be used to measure the contribution of each point in a training set towards making a trained classifier more accurate. Ruoxi Jia et al. previously published an efficient method to calculate these Shapley values using k-nearest neighbor (kNN) classification. This paper shows empirically that those Shapley values tend to be too optimistic, and many data points with such positive values are actually harmful if used in training. The paper then proposes a heuristic improved version of kNN Shapley values, namely to use only the 2K nearest neighbors for each target point. Experimental results show that these new Shapley values lead to improved accuracy for trained classifiers.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The evidence for the phenomenon of inflation is believable. The experimental improvements in accuracy are large enough to be likely valid despite concerns about experimental methodology (test sets too small, not independent, no measure of statistical significance). The new heuristic kNN Shapley values can therefore be valuable in real-world applications.\", \"weaknesses\": \"The CKNN variation suggested for Shapley values is heuristic, with little evidence for any type of optimality. The paper should explain the motivation for introducing the hyperparameter T with more clarity and persuasiveness.\\n\\nThe paper should explain better that because T = N-2K is used, in fact the method picks only the 2K nearest points. This is intuitively a good idea, but its ramifications must be explained.\\n\\nIt is not clear which experiments used an independent test set. The paper should report the statistical significance of differences in accuracy, using McNemar tests or something similar. Many differences may not be significant, because the test set is too small. 
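(For concreteness, an exact McNemar comparison of two classifiers needs only the counts of discordant test points, i.e., points where exactly one method is correct. The sketch below is an illustration added here for reference; the counts are hypothetical and are not taken from the paper or the review.)

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar test from discordant-pair counts.

    b: test points where method A is correct and method B is wrong.
    c: test points where method B is correct and method A is wrong.
    Under H0 (equal accuracy), each discordant point goes either way
    with probability 0.5, so min(b, c) ~ Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact binomial p-value, capped at 1.
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(1.0, p)

# Hypothetical 200-point test set: a 3% accuracy gap arising from
# 12 vs. 6 discordant points gives p ~ 0.24, far from significant.
p_small = mcnemar_exact_p(12, 6)
```

On a test set this small, even a visibly large accuracy gap can fail to reach p < 0.05, which is exactly the concern about statistical significance raised above.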
\\n\\nAccuracy achieved with the CKNN method is highly sensitive to hyperparameters K and T. This makes the method less useful in practice.\", \"questions\": \"012: Inflation is shown only for KNN-Shapley values, not for non-KNN values. So this sentence in the abstract is too strong.\", \"019\": \"It is misleading to mention small-sized training subsets, because with T = N-2K, in fact the training subset that is used for each target point is very small, of size 2K only. The benefit is that for each target point, this small subset is focused.\", \"046\": \"The reference (Jia et al., 2019b) to \\\"Towards Efficient Data Valuation Based on the Shapley Value\\\" is not correct, because KNN-Shapley does not appear in this paper. The reference to \\\"Scalability vs. utility: Do we have to sacrifice one for the other in data importance quantification?\\\" (CVPR, 2021) is also inappropriate, because the method is due to page 9 of \\\"Efficient task-specific data valuation for nearest neighbor algorithms\\\" (VLDB, 2019).\\n\\nAlso, the formatting of the bibliography must be fixed, so that labels such as \\\" (Jia et al., 2019b)\\\" are visible in the PDF.\", \"134\": \"Figure 1 would be clearer using the same binning for the red and blue plots. The red line shows accuracy with a single bin removed. It would be more intuitive to remove this bin and also all bins to its left.\", \"maybe_more_important\": \"The red line measures accuracy. This should be measured on an independent test set. The points that are removed are those that have a negative or small Shapley value as measured on the training set. It is not surprising that removing these points increases accuracy on the training set, because they have been identified as detrimental on precisely this set. The more interesting question is whether they are also detrimental on a separate i.i.d. test set.\", \"153\": \"The green region contains about half the entire dataset. Discuss why half of all points are harmful. 
It is not surprising if a few points are harmful, for example because their training labels are wrong. But it is surprising that so many points are. Is this true only because \\\"harmful\\\" is relative to the training set itself? Many fewer points may be harmful when the test set is independent.\", \"174\": \"It is not obvious that the first term is impactful, because it is small, since its denominator is N, which is much larger than K. Here is an alternative intuition: The reason to ignore the early terms is that their points are the most distant from the test point, so they are the most irrelevant to it. Even if they have the same label, that is accidental, so whether their labels are the same merely introduces noise. Moreover, the second and later terms include the denominator K << N, so they are much more heavily weighted than the first term.\", \"197\": \"Define \\\"semi-value.\\\"\", \"231\": \"In Table 2, why are so many accuracies multiples of 0.01 or of 0.005? Because test sets contain only 100 or 200 data points? If this is true, then differences in accuracy are not statistically significant.\", \"273\": \"Why remove a fixed % of training points, when the advantage of CKNN is supposedly that the threshold zero tells us approximately how many points to remove?\\n\\nClarify the meaning of \\\"KNN\\u2019s performance trained on the training set excluding samples\\\". A fixed separate i.i.d. test set should be used. If accuracy is also measured on the training set excluding samples, this is not meaningful.\", \"298\": \"In Figure 3B, results for K < 5 must be included, since K=5 is best for the Wind dataset. The numerical values for Wind in Table 2 (0.8950 and 0.9100) are different from those in Figure 3B; why?\\n\\nIn Figure 3C, T varies in the range 1950 to 1990. Why this range? Why is it so narrow? Explain how it is consistent with the recommendation T = N - 2K on line 377. 
\\n\\nMore broadly, Figures 3B and 3C show that accuracy is highly sensitive to the choice of K and T. Why? Because training and test set sizes are small.\", \"317\": \"One goal of the CKNN method is for the empirically best threshold to be closer to zero. But why is this an important goal? Instead of the hyperparameter, we could use a validation set to determine the value of a non-zero threshold for standard kNN Shapley values, such as 0.8e-4 in Figure 1.\", \"note\": \"I have not evaluated Section 5 carefully.\", \"651\": \"Typo.\", \"728\": \"Error in table title.\", \"757\": \"Table 7 is important and should be in the main paper, because this is how CKNN-Shapley might be used in practice. The same criticisms as above apply: test sets should be bigger and statistical significance should be reported.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yTqt's Additional Comments (2/2)\", \"comment\": \"**Table 7**\\nThank you for your feedback and for emphasizing the importance of Table 7 for practitioners. To address your request for clarity and standard methodology, we have conducted additional experiments following a routine training/validation/test split. These experiments evaluate the generalizability of CKNN-Shapley on other classifiers, with both validation and test results presented. 
The updated results are shown in the table below:\\n\\n| **Method \\\\\\\\ Datasets** | **Pol (Val)** | **Pol (Test)** | **Wind (Val)** | **Wind (Test)** | **CPU (Val)** | **CPU (Test)** |\\n|-------------------------------------|---------------|----------------|----------------|-----------------|---------------|----------------|\\n| Vanilla MLP | 0.9909 | 0.9725 | 0.8735 | 0.8795 | 0.9475 | 0.9565 |\\n| MLP with negative KNN-Shapley value samples removed | **0.9949** | **0.9585** | 0.8939 | 0.8930 | 0.9455 | 0.9705 |\\n| MLP with negative TKNN-Shapley value samples removed | 0.8515 | 0.8185 | 0.8269 | 0.8550 | 0.9085 | 0.9535 |\\n| MLP with negative CKNN-Shapley value samples removed | 0.9915 | 0.9355 | **0.8979** | **0.8830** | **0.9669** | **0.9570** |\\n| Vanilla LR | 0.8700 | 0.8650 | 0.8500 | 0.8700 | 0.9300 | 0.9500 |\\n| LR with negative KNN-Shapley value samples removed | 0.8800 | 0.8550 | 0.8800 | 0.8850 | 0.9400 | **0.9750** |\\n| LR with negative TKNN-Shapley value samples removed | 0.8250 | 0.8150 | 0.8250 | 0.8600 | 0.9200 | 0.9600 |\\n| LR with negative CKNN-Shapley value samples removed | **0.8850** | **0.8750** | **0.9350** | **0.8800** | **0.9600** | 0.9650 |\\n| Vanilla SVM | **0.9650** | **0.9500** | 0.8750 | **0.9000** | 0.9400 | 0.9650 |\\n| SVM with negative KNN-Shapley value samples removed | 0.8650 | 0.9300 | 0.8850 | 0.8800 | 0.9350 | 0.9700 |\\n| SVM with negative TKNN-Shapley value samples removed | 0.8450 | 0.8200 | 0.8250 | 0.8550 | 0.8900 | 0.9600 |\\n| SVM with negative CKNN-Shapley value samples removed | 0.9600 | 0.9200 | **0.8850** | 0.8900 | **0.9450** | **0.9750** |\\n\\nThese results demonstrate that CKNN-Shapley achieves the best performance in most cases across validation and test datasets, though not in all cases. This indicates the good generalization of our CKNN-Shapley, which inherits from the KNN-Shapley framework. Although it is not our research focus, the results demonstrate the usability in practice. 
Note that the similarity between validation and test distributions has a significant impact on test results. Such a relationship highlights the importance of carefully aligning validation and test distributions, a principle that serves as a foundation in machine learning.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We are more than happy to know that we have successfully addressed all of your concerns and you are willing to recommend our paper for acceptance. We would like to kindly remind you that an acceptance requires at least a score of 6, but the current score is 5. Additionally, we sincerely hope you can support our paper during the reviewer-AC discussion phase.\"}", "{\"comment\": \"We would like to grasp the last minute to learn from the reviewer to improve the quality of our paper and address your concerns.\"}", "{\"title\": \"thank you for the clarifications\", \"comment\": \"Dear authors: Thank you for considering my comments carefully. I will keep my score unchanged at 5, because I feel that the responses do not change the overall strengths and weaknesses of the submission.\\n\\nThe central issue is that with T = N-2K, the idea is to use only the 2K nearest neighbors for each point. For clarity, it would be better to define T' = N-T and then use T' in discussions.\\n\\nUsing T' = 2K is a heuristic idea that should not be obscured with theoretical arguments that do not apply precisely. Instead, can you answer the basic question *why* standard kNN Shapley causes so much inflation?\\n\\nAnother central problem is that test sets of size 100 or 200 are too small to get accuracy differences that are statistically significant. It doesn't matter whether a method is deterministic, or what previous papers did. 
What matters is that a randomly different test set might give a different ranking of alternative methods.\\n\\nThe practical application of the method is to do thinning of a dataset, and then to apply a different learning method, as reported in Table 7. To evaluate this (good!) application, it is necessary to have separate training, validation, and (larger) test subsets. Otherwise, results can be due to overfitting.\\n\\nFigure 1 is too complicated; \\\"aiming to present as much information as possible\\\" is not desirable.\"}", "{\"title\": \"Response to Reviewer yTqt's First Feedback (1/2)\", \"comment\": \"**Central issue**\\nWe would appreciate clarification on why \\\\( T' = N - T \\\\) is considered the central issue. Could you please advise?\\n\\n---\\n\\n**Theoretical arguments** \\nDoes Reviewer yTqt expect some theoretical analysis of the hyperparameter \\\\( T \\\\)? To the best of our knowledge, there is no theoretical analysis of \\\\( K \\\\) in KNN; instead, empirical settings are used or adapted according to various datasets and scenarios. Note that we use a fixed hyperparameter in all experiments. If Reviewer yTqt knows of good work along this direction, we are happy and eager to learn from and test it in practice!\\n\\n---\\n\\n**Root of inflation** \\nInflation arises from contributions calculated on smaller subsets during Shapley value estimation, which leads to inflated valuations. Given the complete training set, we can find the \\\\( K \\\\) neighbors of a validation sample. Consider a sub-training set that does not include any of these \\\\( K \\\\) neighbors: the standard KNN-Shapley will still involve such cases in its calculation. However, none of the samples in the sub-training set plays a decisive role in predicting the validation sample. We provide the explanation in Section 4. By constraining the size of sub-training sets, we can avoid such cases.
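(To make this mechanism concrete, here is a toy sketch of one common form of the recursive KNN-Shapley computation for a single validation point, together with an illustrative "keep only the 2K nearest neighbors" calibration. The label vector, the choice of K, and the zeroing step are hypothetical simplifications for illustration, not the paper's exact Eq. (2).)

```python
def knn_shapley_per_val(y_sorted, y_val, K):
    """Recursive KNN-Shapley values for one validation point.

    y_sorted: training labels sorted by increasing distance to the
    validation point (index 0 = nearest neighbor).
    """
    N = len(y_sorted)
    s = [0.0] * N
    # Non-negative initialization: the source of the upward bias.
    s[N - 1] = float(y_sorted[N - 1] == y_val) / N
    for i in range(N - 2, -1, -1):
        rank = i + 1  # 1-based rank of point i
        delta = float(y_sorted[i] == y_val) - float(y_sorted[i + 1] == y_val)
        s[i] = s[i + 1] + delta / K * min(K, rank) / rank
    return s

# Toy example with y_val = 1 and K = 2: the farthest point shares the
# label but can never enter the 2-NN vote on the full set, yet the
# recursion still assigns it the positive value 1/N.
y_sorted = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
K = 2
s = knn_shapley_per_val(y_sorted, y_val=1, K=K)

# Illustrative calibration in the spirit of T = N - 2K: keep credit
# only for the 2K nearest neighbors and zero out the rest.
s_cal = s[:2 * K] + [0.0] * (len(s) - 2 * K)
```

In this toy run the farthest matching point receives s[-1] = 0.1 > 0 despite never influencing the 2-NN prediction on the full set, while the calibrated values assign it exactly zero.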
As shown in Table 8, the inflation becomes more pronounced when \\\\( T \\\\) is smaller, which supports our conclusion.\\n\\n---\\n\\n**Small test sets** \\nOur experiments include datasets with larger test sets, such as MNIST, FMNIST, CIFAR-10, AGnews, SST-2, and News20, which address the concern about test set size. Note that the smaller test sets of size 100 or 200 were not chosen arbitrarily but rather to align with the settings used in prior work for fair comparisons.\\n\\nTo further address Reviewer yTqt's concern, we follow the suggestion and conduct additional experiments by generating five new test sets using different random seeds. Under the \\\"remove negative\\\" setting, we recompute the accuracy on these new test sets. The results are shown in the tables below:\\n\\n---\\n\\n### Table: Classification performance of different methods across five seeds on the POL dataset\\n| Method\\\\\\\\Seed | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Avg \\u00b1 Std |\\n|--------------------|---------|---------|---------|---------|---------|---------------|\\n| Baseline Accuracy | 0.935 | 0.910 | 0.925 | 0.925 | 0.925 | 0.924 \\u00b1 0.008 |\\n| KNN-Shapley | 0.955 | 0.915 | 0.950 | 0.960 | 0.965 | 0.949 \\u00b1 0.018 |\\n| TKNN-Shapley | 0.845 | 0.765 | 0.790 | 0.785 | 0.835 | 0.804 \\u00b1 0.031 |\\n| KNN-JW-Shapley | 0.940 | 0.925 | 0.955 | 0.955 | 0.845 | 0.924 \\u00b1 0.041 |\\n| **CKNN-Shapley** | **0.970** | **0.945** | **0.960** | **0.970** | **0.970** | **0.963 \\u00b1 0.010** |\\n\\n---\\n\\n### Table: Classification performance of different methods across five seeds on the WIND dataset\\n| Method\\\\\\\\Seed | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Avg \\u00b1 Std |\\n|--------------------|---------|---------|---------|---------|---------|---------------|\\n| Baseline Accuracy | 0.805 | 0.845 | 0.900 | 0.855 | 0.845 | 0.850 \\u00b1 0.030 |\\n| KNN-Shapley | 0.835 | 0.850 | 0.890 | 0.855 | 0.865 | 0.859 \\u00b1 0.018 |\\n| TKNN-Shapley | 0.785 | 
0.760 | 0.885 | 0.805 | 0.770 | 0.801 \\u00b1 0.045 |\\n| KNN-JW-Shapley | 0.845 | 0.855 | 0.895 | 0.845 | 0.870 | 0.862 \\u00b1 0.019 |\\n| **CKNN-Shapley** | **0.895** | **0.890** | **0.925** | **0.910** | **0.900** | **0.904 \\u00b1 0.012** |\\n\\n---\\n\\n### Table: Classification performance of different methods across five seeds on the CPU dataset\\n| Method\\\\\\\\Seed | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Avg \\u00b1 Std |\\n|--------------------|---------|---------|---------|---------|---------|---------------|\\n| Baseline Accuracy | 0.930 | 0.935 | 0.915 | 0.940 | 0.895 | 0.923 \\u00b1 0.016 |\\n| KNN-Shapley | 0.925 | 0.925 | 0.930 | 0.945 | 0.905 | 0.926 \\u00b1 0.013 |\\n| TKNN-Shapley | 0.885 | 0.900 | 0.885 | 0.900 | 0.865 | 0.887 \\u00b1 0.013 |\\n| KNN-JW-Shapley | 0.945 | 0.930 | 0.935 | 0.945 | 0.925 | 0.936 \\u00b1 0.008 |\\n| **CKNN-Shapley** | **0.950** | **0.950** | **0.950** | **0.965** | **0.935** | **0.950 \\u00b1 0.009** |\\n\\n---\\n\\nAcross the three datasets (*POL*, *WIND*, and *CPU*), the CKNN-Shapley method consistently achieves the highest accuracy across all seeds, outperforming the best other methods (with the highest accuracy for each seed). Paired t-tests confirmed the statistical significance of this improvement, with p-values of 0.020 for POL, 0.0025 for WIND, and 0.0070 for CPU, all below the 0.05 threshold. These results demonstrate that CKNN-Shapley not only delivers consistently superior performance across different seeds but also establishes a clear advantage over other methods in diverse classification tasks.\\n\\n---\"}", "{\"summary\": \"The paper aims to address the issue of value inflation in Shapley value-based data valuation methods. The proposed Calibrated KNN-Shapley (CKNN-Shapley) is to recalibrate the threshold to distinguish detrimental from beneficial samples, aiming to correct inflation that misidentifies some harmful samples as beneficial. 
CKNN-Shapley implements constraints on training subset sizes to mitigate inflation effects, which arise from the improper selection of small-sized subsets in the original KNN-Shapley approach. Through experiments, CKNN-Shapley demonstrates improved performance in classification and robustly adapts to applications like mislabeled data handling, online learning, and active learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The paper introduces Calibrated KNN-Shapley (CKNN-Shapley) as a novel solution to address value inflation in data valuation using KNN-Shapley. This approach is significant as it recalibrates the threshold, effectively distinguishing between beneficial and detrimental samples, which is critical for robust data valuation.\\n(2) The paper conducts extensive experiments across various benchmark datasets, demonstrating CKNN-Shapley\\u2019s ability to outperform traditional KNN-Shapley and its variants. By testing on both image and text data, the paper validates CKNN-Shapley's broad applicability.\\n(3) CKNN-Shapley offers computational efficiency by directly assigning zero to certain sample subsets, reducing the recursive calculations required by the original KNN-Shapley. This efficiency makes it feasible for larger datasets, addressing a significant limitation of Shapley value-based methods in general.\\n(4) The paper\\u2019s calibration method and constraints on subset sizes provide a theoretical foundation that enhances the interpretability of data valuations. By setting a meaningful zero threshold and ensuring subsets closely resemble the original dataset, CKNN-Shapley offers more reliable and interpretable sample valuations.\", \"weaknesses\": \"(1) Despite improvements over traditional Shapley calculations, CKNN-Shapley still incurs notable computational costs, particularly on large datasets or complex deep learning tasks. 
While more efficient than the original KNN-Shapley, the method may not yet be scalable for very large or high-dimensional datasets without further optimization.\\n(2) CKNN-Shapley, like KNN-Shapley, relies on K-Nearest Neighbors as a surrogate model, which may limit its applicability to contexts where KNN is less effective. This assumption may restrict CKNN-Shapley\\u2019s performance in tasks with more complex decision boundaries, where a KNN-based approach may not be ideal.\\n(3) The paper does not extensively explore or benchmark CKNN-Shapley against other semi-value methods or recent alternatives in data valuation, like Banzhaf values or other cooperative game-theoretic approaches. This lack of comparison limits understanding of CKNN-Shapley\\u2019s relative effectiveness and could benefit from further analysis of its competitive positioning.\\n(4) While CKNN-Shapley is tested on simulated scenarios like mislabeled data and online learning, the paper lacks a real-world deployment case study to validate its robustness. Moreover, CKNN-Shapley\\u2019s sensitivity to hyperparameters in dynamic environments, such as varying batch sizes in streaming data, remains under-explored.\", \"questions\": \"This paper proposed Calibrated KNN-Shapley (CKNN-Shapley), a semi-value method that calibrates zero as the threshold to distinguish detrimental samples from beneficial ones by mitigating the negative effects of small training subsets, addressing the value inflation issue observed in KNN-Shapley. The novelty is weak as it\\u2019s an improved version of KNN-Shapley. The detailed questions are listed below:\\n-\\nCould you provide more detailed guidelines or heuristics for selecting the subset size parameter T across different datasets? 
Specifically, how sensitive is CKNN-Shapley to T, and what considerations should practitioners have in choosing an appropriate threshold?\\n-\\nWhile CKNN-Shapley offers improvements in computational efficiency over traditional KNN-Shapley, what optimizations could make CKNN-Shapley scalable for very large or high-dimensional datasets? Could further modifications make CKNN-Shapley feasible for real-time data valuation in dynamic environments?\\n-\\nSince CKNN-Shapley relies on the KNN classifier as a surrogate model, how would CKNN-Shapley perform in contexts with complex decision boundaries or on tasks where KNN is not ideal? Is there a potential to generalize this approach beyond KNN, or could the calibrated approach be adapted for other surrogate models in Shapley-based data valuation?\\n-\\nDid you consider benchmarking CKNN-Shapley against other semi-value or cooperative game-theoretic approaches, such as Banzhaf values or alternatives in data valuation? If so, how does CKNN-Shapley perform relative to these methods, and in what contexts is it most advantageous?\\n-\\nGiven the paper\\u2019s focus on simulated scenarios (e.g., mislabeled data, online learning, active learning), how would CKNN-Shapley perform in real-world applications where data is inherently noisy and highly variable? Are there specific real-world use cases (e.g., medical, financial data) where CKNN-Shapley has been tested, or do you have any recommendations for its deployment in production settings?\\n-\\nIn your online learning and active learning experiments, how sensitive is CKNN-Shapley to hyperparameters like batch size and sample removal thresholds? How would you recommend setting these parameters for optimal performance in dynamic environments with streaming data?\\n-\\nCKNN-Shapley addresses value inflation effectively, yet inflation issues still exist to some extent. 
Are there additional strategies or constraints you would suggest for further reducing value inflation, particularly for beneficial samples that might still be overvalued?\\n-\\nIn your experiments, how do you characterize the types of samples that CKNN-Shapley misidentifies, and are there common patterns or features among them? Could further insights into these misidentified samples help refine CKNN-Shapley or improve its robustness?\\n-\\nWhat are the significant contributions of, and connections with, existing studies on KNN-Shapley, such as Threshold KNN-Shapley (Wang et al. 2023) and KNN-Shapley (Wang and Jia 2023)?\\n-\\nMore theoretical benefits and pitfalls need to be discussed in Section 3. How the CKNN-Shapley method works needs to be comprehensively elaborated in this section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces an improvement for computing the KNN-Shapley value which can be used for assessing the importance of individual training examples.\\n\\nThe Reviewers have underlined that the paper is well-written and self-contained. The solution is rather simple and incremental, but with a significant impact on the applicability of the KNN-Shapley approach. The experimental results are also extensive, showing improvements of the proposed modification over the original approach. \\n\\nNevertheless, the message of the paper and arguments used are often misleading. The Authors seem to motivate their approach by analyzing training errors on subsets of training data (Figure 1). 
This calls for explanation and justification as \"this should be measured on an independent test set.\" When presenting the results on test sets, it still seems \"harmful to remove training examples based on [the proposed] method.\" Moreover, the test sets used seem to be too small, as do their number and variety, so the final conclusions are not clear. One should also underline that the paper concerns a topic with a rather limited audience. \n\nThe paper is borderline, and as an AC I needed to properly weigh the strengths and the weaknesses in order to make the final decision. The insights and the proposed solution are indeed interesting and worth attention. Nevertheless, the paper can be significantly improved to send a clear message without any controversy. Therefore, I have decided to reject it.", "additional_comments_on_reviewer_discussion": "The discussion was very intensive. The Authors were able to deliver additional empirical results and to clarify many doubts of the Reviewers. As a result, the Reviewers increased their initial scores, making the paper borderline."}", "{\"summary\": \"There has been recent work focused on providing tools to explore, quantify and curate data sets used for training learning algorithms. Several of the widely used existing algorithms are based on the calculation of Shapley values from cooperative game theory.\nThe main difficulty in calculating Shapley values comes from the fact that it involves training models on 2^N combinations of the training data, which makes it intractable in real-life applications. In order to overcome this difficulty, a more practical variation, KNN-Shapley, has been proposed (2019, 2021) that enables the calculation of Shapley values with O(N log N) complexity. The new calculation takes advantage of the nature and simplicity of the lazy NN algorithm as a surrogate for more complex learning algorithms. 
\\n\\nThe KNN algorithm can be greatly affected by the issue of inflation, which motivates a variation proposed by the authors, called calibrated KNN-Shapley or CKNN-Shapley, which they show deals efficiently with the aforementioned inflation problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper is very well written, very well explained and self-contained. I was not familiar with the KNN-Shapley technique and was able to catch up quickly by reading the paper and looking at the provided references. The plots are very helpful as well.\\n\\n2) The proposed solution is very simple and arguably incremental, but I think the impact of the small change makes the KNN-Shapley algorithm significantly better when using it in real-life applications. This is one of these cases where Occam's razor applies. The fact that the change makes the KNN-based calculation faster is a plus as well.\\n \\n3) Extensive experimental results in different settings are helpful to see the potential of the proposed approach.\", \"weaknesses\": \"1) Since KNN is not a state-of-the-art algorithm in the modern practitioner toolbelt, my understanding of how this is used in real life would be that you would use CKNN-Shapley to clean the data and improve your training set quality and then go from there and use a higher-performing algorithm, e.g. GBM, XGBoost, SVM, etc. Is this the case? If so, can you add experiments where this two-step process is applied and see how this can improve the performance of these widely used algorithms?\\n\\n2) Based on the provided experimental results, the CKNN-Shapley performance is better on almost all of the datasets where it was compared to other methods. It would be great to add p-values that statistically show the significance. \\n\\n3) Minor: indentation is weird in some places, e.g. line 154. 
seems like the long Figure 1 caption gets \"merged\" with the main text.\", \"questions\": \"See 1) in weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer anAQ\", \"comment\": \"We would like to thank the reviewer for the deep analysis of our work, and for the time, effort, and consideration. Below, we answer the questions raised by the reviewer:\\n\\n**Abstract** \\nThank you for highlighting this important point. We will revise the abstract and introduction to more clearly specify that our motivation is centered on the inflation issue in KNN-based Shapley value methods. \\n\\n**Link Between Motivation and Proposed Method** \\nIn Section 3, we provide the analysis of value inflation, which leads to our proposed method. We would like to restate the rationale here. Recalling the nature of the KNN classifier, which makes predictions based only on the neighbors, we hypothesize that value inflation stems from improper subset selection in KNN-Shapley. Certain subsets with only a few samples exhibit significant divergence from the original set, leading to an exaggeration of the contribution of a specific sample on these subsets. This, in turn, gives rise to the phenomenon of value inflation. To address this, our CKNN-Shapley mitigates the negative effects of small-sized training subsets.\\n\\n**$T$ Only for Addressing Misclassified Samples?** \\n$T$ in CKNN-Shapley controls the subsets to be considered in data valuation and mitigates the inflation. By this means, it not only corrects misclassified samples, but also calibrates the values of beneficial samples. 
See Figure 2, where KNN-Shapley not only has a negative effect on the misidentified samples but also tends to inflate the valuation of most samples.\\n\\n**Could misclassified samples still exist even with $T$ applied?** \\nWhile setting the $T$ value does not guarantee the calibration of all misclassified samples, our experimental results show that CKNN-Shapley significantly reduces inflation effects and the misclassification rate across multiple datasets, indicating that it effectively addresses the issue in most cases.\\n\\n**Systematic assignment of $T$ value** \\n$T$ is a hyperparameter in CKNN-Shapley. Similar to selecting different values of $K$ which impacts accuracy, the choice of $T$ in CKNN-Shapley also varies depending on the dataset. In Figure 3, we explore various $T$ values, demonstrating that an appropriate $T$ setting can effectively improve the accuracy of the KNN classifier. Table 8 provides further details, where we show that setting \\\\( T = N - 2K \\\\) yields optimal results.\\n\\n**Semi-value** \\nSemi-value refers to a family of cooperative game theory methods that satisfy all the axioms of the Shapley value except for the efficiency axiom. This relaxation allows semi-values to allocate rewards based on the marginal contributions of each player while providing greater flexibility in value distribution, which can be beneficial for computational efficiency in certain applications. \\nWe will make sure to include a citation to [1] in future versions to properly reference this theoretical foundation.\\n\\n[1] Dubey, Pradeep, Abraham Neyman, and Robert James Weber. Value theory without efficiency. *Mathematics of Operations Research*, 6(1), 122-128, 1981.\"}" ] }
EXGahWDp1E
Optimization Proxies using Limited Labeled Data and Training Time - A Semi-Supervised Bayesian Neural Network Approach
[ "Parikshit Pareek", "Kaarthik Sundar", "Deepjyoti Deka", "Sidhant Misra" ]
Constrained optimization problems arise in various engineering system operations such as inventory management and electric power grids. However, the requirement to repeatedly solve such optimization problems with uncertain parameters poses a significant computational challenge. This work introduces a learning scheme using Bayesian Neural Networks (BNNs) to solve constrained optimization problems under limited labeled data and restricted model training times. We propose a semi-supervised BNN for this practical but complex regime, wherein training commences in a sandwiched fashion, alternating between a supervised learning step (using labeled data) for minimizing cost, and an unsupervised learning step (using unlabeled data) for enforcing constraint feasibility. Both supervised and unsupervised steps use a Bayesian approach, where Stochastic Variational Inference is employed for approximate Bayesian inference. We show that the proposed semi-supervised learning method outperforms conventional BNN and deep neural network (DNN) architectures on important non-convex constrained optimization problems from energy network operations, achieving up to a tenfold reduction in expected maximum equality gap and halving the optimality and inequality (feasibility) gaps, without requiring any correction or projection step. By leveraging the BNN's ability to provide posterior samples at minimal computational cost, we demonstrate that a Selection via Posterior (SvP) scheme can further reduce equality gaps by more than 10%. We also provide tight and practically meaningful probabilistic confidence bounds that can be constructed using a low number of labeled testing data and readily adapted to other applications.
[ "Optimization Proxy", "Semi-supervised Bayesian Neural Networks", "Constrained Optimization" ]
Reject
https://openreview.net/pdf?id=EXGahWDp1E
https://openreview.net/forum?id=EXGahWDp1E
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xAcUXUUQFg", "vEvbjAY6D8", "ovvwN29KVn", "lABKvTlfN7", "gYm4CEGuux", "eoXi9YEVDF", "eKdvE7oLCH", "ZOp1oUa1kt", "YY9ycygNEJ", "YUyCx7zqMy", "YLi4TIwKlh", "XfoKmtuQ0T", "UvTzo3zwtR", "6Terd7KA5N", "5HwNwTEBoc", "2fvLOYb9Wh", "0Gyu3WhNDW" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732431644197, 1737523901352, 1732366969739, 1732772265045, 1732744235290, 1730571696715, 1732371724397, 1730443504415, 1729996672860, 1732513193909, 1732370876747, 1732513132479, 1730672394960, 1732366689003, 1732370040111, 1734719867530, 1732371540330 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_KQjj" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_MUif" ], [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_uTQN" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_KQjj" ], [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_MUif" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Reviewer_Vowh" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ], [ "ICLR.cc/2025/Conference/Submission8325/Area_Chair_UZnP" ], [ "ICLR.cc/2025/Conference/Submission8325/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your detailed response. 
While I acknowledge your efforts to address the concerns, I still have some important points to discuss regarding the BNN methodology and experimental validation:\", \"regarding_experimental_design\": [\"While the authors mentioned that they compared the proposed approach and baseline models (MAE + penalty and MSE + penalty), it seems an **unfair** comparison. Specifically, the supervised baselines utilize only labeled data for MAE/MSE and penalty calculations, but the sandwich/semi-supervised training leverages both labeled and unlabeled data. So, it is not surprising that BNN has smaller constraint violations. One straightforward and fair baseline is using sandwich/semi-supervised training for regular DNN with (the same) labeled and unlabelled data, as I mentioned in my initial review.\", \"Regarding the computational concerns with power flow equations, I would like to highlight two established approaches from the literature:\", \"Predicting voltage and power generation, with remaining variables solved via Newton's method (e.g., Donti (2021)). While computationally intensive, this ensures strict feasibility.\", \"Predicting voltage magnitude and angle, with power generation solved through closed-form calculations (e.g., Huang, W., & Chen, M. (2021)). This offers negligible computational overhead but has a power flow mismatch.\", \"Therefore, as suggested in my initial review, the authors should discuss and consider the unsupervised training approach as a baseline, which does not incur significant computational issues from the power flow calculation.\"], \"regarding_the_motivation_of_bnn\": [\"The current justification for BNN usage relies primarily on experimental results. 
Given the aforementioned concerns about experimental comparisons, a stronger theoretical/empirical justification is needed.\", \"Specifically: What unique theoretical advantages does BNN offer over DNN for learning **deterministic** mappings?\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply to Reviewer uTQN\", \"comment\": \"We thank the reviewer for their comments.\", \"weakness\": \"1. The authors would kindly like to understand the reviewer's perspective on why the proposed work, which introduces a novel semi-supervised learning approach to solve constrained optimization problems in practically significant limited data and time settings, is perceived as an applied work with limited technical novelty.\\n\\n2. The authors would like to know what uncertainty calibration analysis the reviewer is referring to. Figure 3 and subsequent results within the paper provide extensive evidence on the effectiveness of the proposed method in providing variance information and variance in error. We have also used a multiplier (2 specifically) to avoid issues from underestimation of variance, if any, due to the mean-field assumption in variational Bayesian inference.\\n\\n3. We would like to argue that it is standard practice to assume that at least one feasible solution exists for every input when studying machine learning models for constrained optimization problems. All previous works on ML proxies including DC3, Park & Pascal, Zumzum & Kyri etc. work under this assumption. Even in practice, this is not a limiting assumption for ACOPF, as the power system is operating at a feasible solution at every instance. \\n\\n4. A limitation of this work stems from its motivation: it is targeted at a specific setting of limited training data and time, and does not provide a general method which minimizes violation in general ACOPF proxy settings. Besides, a well-known limitation is the high training time required by BNNs compared to DNNs. 
We will update the manuscript in the revision, explicitly mentioning these. We do however believe that despite its higher training complexity BNN is the better choice in the limited data and time setting. \\n\\n\\nWe would like to highlight that the paper already presents a lot of comparisons with existing methods, especially the ones that are relevant to the power systems setting. Additionally, we will update the manuscript with additional discussion and results, which can be found on this link: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link \\n\\nAgain, we would like to emphasize that methods like ensembles will require a lot of training time, which is beyond the problem setting of this paper. We will include additional discussion in the introduction to clarify this point.\"}", "{\"title\": \"Reply to Reviewer MUif\", \"comment\": \"We thank the reviewer for their constructive feedback and updated score. We have the following comments related to the remaining queries:\\n\\nIn the context of the labeled data discussion, we want to highlight that, similar to the proposed and supervised learning methods, the unsupervised methods, like proposed by Park & Pascal (2023) also need **validation datasets**. Now, by extending the reviewer\\u2019s argument, let us consider a situation where an unsupervised learning approach is used, and 1512 samples (same as the total training + testing samples in the proposed work) are available for validation and error bounding. Using Hoeffding's bound with $R=1$ and $\\\\delta = 0.05$, one obtains:\\n$$\\n\\\\varepsilon = \\\\frac{0.9}{\\\\sqrt{1512}} \\\\approx 0.02.\\n$$ \\nClearly, this is a very loose bound, as voltage value variations are typically of the order of $10^{-2}$. \\n\\nNow, one can use Empirical Bernstein bounds, leveraging the validation samples to calculate the empirical total variance in error $\\\\widehat{\\\\mathbb{V}}_e$ using all 1512 samples. 
In this case, the error bound will be: \\n$$\\n\\\\varepsilon = \\\\frac{1.88\\\\sqrt{\\\\widehat{\\\\mathbb{V}}_e}}{\\\\sqrt{1512}} + \\\\frac{5.3344}{1512} = 0.048\\\\sqrt{\\\\widehat{\\\\mathbb{V}}_e} + 0.0035.\\n$$ \\n\\nHere, we want to emphasize that **although Empirical Bernstein bounds have been available in the literature for a long time, they are not commonly used for error-bounding exercises in works related to constrained optimization proxies, neither in supervised nor in unsupervised settings. Thus, one minor yet important contribution of our work is the reintroduction and contextualization of Empirical Bernstein-based probabilistic bounds in optimization proxy settings.** \\n\\nNow, let us compare this with the proposed work's setting, where only 1000 samples are available for validation, along with the hypothesis that $2\\\\text{MPV}$ is an upper bound of the total variance in error and that the second term of the total variance in error is small. (Note that this hypothesis has been stated in the paper, discussed intuitively, and validated with various test cases.) Using the theoretical Bernstein bounds\\u2014**which cannot be used by unsupervised methods as MPV information is only available with BNNs**\\u2014we obtain: \\n$$\\n\\\\varepsilon = \\\\frac{2.88\\\\sqrt{\\\\text{MPV}}}{\\\\sqrt{1000}} + \\\\frac{0.867}{1000} = 0.091\\\\sqrt{\\\\text{MPV}} + 0.0008.\\n$$ \\n\\nBy analyzing Empirical and Theoretical Bernstein-based error values, with 1512 and 1000 validation samples for unsupervised and proposed learning methods respectively, we make the following assertions: \\n\\n**1.** The second term of the proposed method\\u2019s bound is significantly smaller, $0.0008 < 0.0035$, compared to what one can achieve with unsupervised methods. 
\\n\\n**2.** Once the proposed method achieves MPV values on the order of $10^{-3}$ or lower, the proposed Theoretical Bernstein bound will always be tighter than the Empirical Bernstein bounds achieved by unsupervised methods, even if $\\\\widehat{\\\\mathbb{V}}_e$ is on the order of $10^{-4}$ \\u2014an order of magnitude better than the proposed method. \\n\\nIn view of the extensive testing performed in the paper, we argue that the proposed method can achieve tighter bounds with the same number of total labeled data samples, even if unsupervised methods converge to better solutions. We also want to highlight again that the 10 minutes of training time in the proposed method is very short compared to the 120 minutes required by the unsupervised method presented by Park and Pascal (2023).\"}", "{\"comment\": \"Thank you for your response. I have updated my score accordingly.\\n\\nHowever, I remain unconvinced regarding the constraints of limited labeled data and restricted training time. Recent advances in self-supervised (unsupervised) learning for constrained optimization *(e.g., Park and Hentenryck, 2023; Arya, Rahman, and Gogate, 2024)* address these challenges effectively, as they do not rely on labeled data. In contrast, supervised methods require the generation of optimal solutions for true labels, and the model\\u2019s performance is inherently tied to the quality of these labels. This can introduce additional bottlenecks, particularly in scenarios where high-quality labels are expensive or difficult to obtain.\\n\\nReducing inference time is critical, particularly for real-time applications. However, even under a restricted training time constraint, self-supervised models could still be trained within the available time (**even if not until convergence**) and meaningfully compared with the proposed methods. 
Furthermore, since self-supervised methods do not require the additional time for label generation, the training time could be reallocated to parameter learning, potentially resulting in improved model performance.\\n\\nFinally, I do not find the constraint of limited labeled training data particularly compelling, as self-supervised methods (Park and Hentenryck, 2023; Arya, Rahman, and Gogate, 2024) do not require labeled data, and, as the authors noted, sampling data is not particularly time-intensive. This suggests that the limited training data constraint could be circumvented by adopting self-supervised approaches.\\n\\nThat said, I appreciate the novelty of incorporating Bayesian semi-supervised learning, the sandwich BNN framework, and probabilistic confidence bounds, which are significant contributions.\\n\\n\\n\\nPark, S. and Hentenryck, P.V. (2023) \\u2018Self-Supervised Primal-Dual Learning for Constrained Optimization\\u2019.\\n\\nArya, S., Rahman, T. and Gogate, V. (2024b) \\u2018Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models\\u2019, in Proceedings of The 27th International Conference on Artificial Intelligence and Statistics.PMLR, pp. 2791\\u20132799.\"}", "{\"summary\": \"This paper proposes a Bayesian neural network-based semi-supervised learning framework for tackling constrained optimization problems. The authors target engineering problems where labelled data and computational resources are limited. The proposed approach alternates between a supervised learning step which minimizes the cost of the optimization problem and an unsupervised learning step which enforces the related constraints. 
The authors claim that their approach outperforms conventional methods while significantly reducing equality and feasibility gaps.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is relatively well-organised and easy to understand;\", \"The paper offers a fairly well-executed application/combination of Bayesian semi-supervised learning techniques to a realistic real-world problem in constrained optimization;\", \"The Sandwich BNN framework idea is fairly interesting and seems to have direct applicability to general time-sensitive and resource-limited engineering optimization problems\"], \"weaknesses\": [\"As far as I can tell, the work is primarily applied and there is limited technical novelty, which could reduce the community's interest in this paper;\", \"The paper emphasizes the benefits of Bayesian neural networks for uncertainty estimation but offers no uncertainty calibration analysis or comparisons with other viable uncertainty estimation methods. It is important to note that Bayesian neural networks tend to underestimate variance (partly) due to the mean-field assumption;\", \"The assumption that the problem has a feasible solution strikes me as quite strong in the general case, for example in complex highly non-convex settings. I would be interested in hearing the author's thoughts on this;\", \"There is very limited discussion about the drawbacks and/or limitations of the approach.\", \"In my view, the paper could be improved by broadening comparisons with other Bayesian and non-Bayesian uncertainty methods and clarifying and testing the feasibility assumptions. 
The apparent lack of significant innovation could reduce the impact of the paper, as similar methods (e.g., ensembles or conventional semi-supervised frameworks) could potentially achieve comparable results without the additional complexity of Bayesian inference.\"], \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer MUif (Point 7 onwards)\", \"comment\": \"7. We would like to highlight that the manuscript includes five different state-of-the-art baselines for all the systems. Thus, we disagree that baselines were omitted. Also, as noted in the paper and before, self-supervised methods are very time-consuming to train (the authors in Park & Van Hentenryck (2023) report that **5932.5 seconds** are needed to train models on the 57-bus system and **7605.1 seconds** on the 118-bus system). The primary motivation of this paper is to develop an optimization proxy under a practical but challenging situation where total labeled data for training as well as total time for training and testing are constrained. Moreover, to strengthen the discussion, we will update the comparative results in the revised manuscript, which can be found here: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link\\n8. Yes, we agree that we have not accounted for the time for generating the 512 training samples. However, we do define the total time to be data generation + training time, in which data generation time is constant for all different models. We have not commented on data generation time as we have used the open-source **torch_geometric** dataset, which does not provide generation time. However, in the revised manuscript, we will update the training data generation time using the **PowerModels.jl** package. It takes on average 0.15 sec. to solve an instance of ACOPF for the 57-bus system and 0.45 sec. for the 118-bus system. 
Therefore, it takes 45.36 sec to generate the training + validation set for case57 and 136.08 sec for case118, on a five-core CPU. In practice, this data can also be available from historical operations. Further, it is clear that this data generation time is much lower than the 600 sec of training time adopted in this work.\\n9. Yes, the proposed method requires validation samples to generate the gap and constraint violation. The statement was written to highlight that unsupervised methods are not completely free from labeled data requirements. We understand that this might be misleading, hence we will qualify the statement as follows: ``Moreover, unsupervised methods, similar to the supervised and proposed methods, also require validation data and consequently incur data generation time. This validation is needed to provide confidence bounds on the error with respect to the true solution.\\\" However, we want to highlight that for the same set of validation data, DNN-based models produce trivial confidence bounds due to the use of only Hoeffding's inequality, as highlighted in Section 4 of the paper. Our method is able to leverage the sampling of weights inside the BNN to obtain better confidence bounds through the use of the Bernstein inequality.\", \"on_mistakes\": \"We thank the reviewer for the careful reading. We will proofread and revise the manuscript. \\n\\nThe phrase ``a standard approach'' in the context of the mean prediction was written to reflect that we do not opt for a weighted-mean approach and instead go with the most common approach of computing the expectation for the posterior mean. We will qualify the statements to reflect the same in the revised manuscript.\"}", "{\"summary\": \"The paper introduces a semi-supervised learning framework based on BNN to solve constrained optimization problems when labeled data is limited and model training times are restricted.\\nThe authors propose a sandwich-style training approach alternating between supervised and unsupervised learning. 
Additionally, they employ BNNs to generate multiple predictions and improve feasibility through a selection process. Experimental results demonstrate that BNNs outperform DNNs in settings with low data and limited computational resources.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The work considers a practical and important scenario of solving constrained optimization problems when labeled data is limited and model training times are restricted.\\n2. The authors derive tight confidence bounds for the testing error by utilizing Bernstein's inequality.\\n3. The experiments conducted on the case 2000 demonstrate the scalability of the proposed approach.\", \"weaknesses\": \"1. While the main challenges addressed in this work are limited labeled data and restricted model training times, it is unclear how BNNs specifically contribute to addressing these challenges. The sandwich-style semi-supervised training approach proposed by the authors can be applied to any NN structure.\\n - Training BNNs is computationally more expensive than regular NNs. The authors should discuss it and present the complexity explicitly.\\n - In section 3.1., the authors propose a selection via posterior strategy to reduce the constraint violation. It is unclear what the benefits of BNN are in this strategy since one can add Gaussian noise to regular DNN prediction and select the one with minimum constraint violation. Theoretically, how does the equality constraint violation decrease with the increased generated samples?\\n - In Section 4, the authors derive confidence bound for the testing error using the MPV as a proxy for the TVE in the Bernstein bound. They hypothesize an inequality (line 286) without providing sufficient justification. 
The authors should explicitly state and discuss the assumptions made in deriving this confidence bound.\\n - The authors should discuss and compare their confidence bound with existing generalization analyses or confidence bounds for DNNs based on the number of training samples, such as those presented in [1].\\n\\n2. The authors propose an alternated training approach where supervised and unsupervised learning are performed in separate iterations. However, the benefits of this approach are unclear. Since one can easily combine the supervised and unsupervised loss together at each iteration, it may save more training time. \\n\\n3. Overall, this work combines sandwich training, BNN, and SvP, so a comprehensive ablation study is needed to evaluate the individual contributions and importance of each component. \\n\\n4. In experiments, the authors only include supervised training methods as baselines, excluding self-supervised and primal-dual learning approaches due to their higher training times and computational demands.\\n - The sandwich training approach can also be applied to regular NNs, serving as a meaningful baseline.\\n - There are efficient unsupervised approaches for solving AC-OPF problems that the authors did not discuss in the related work section or compare against in their experiments [2].\\n\\n[1] Kawaguchi, K., Kaelbling, L. P., & Bengio, Y. (2017). Generalization in deep learning. arXiv preprint arXiv:1710.05468, 1(8).\\n\\n[2] Huang, W., & Chen, M. (2021). DeepOPF-NGT: Fast no ground truth deep learning-based approach for AC-OPF problems. In ICML 2021 Workshop Tackling Climate Change with Machine Learning.\\n\\n\\nI will adjust my scores if the concerns are addressed.\", \"questions\": \"1. How does the number of labeled or unlabeled training data affect the BNN performance?\\n2. 
How does the number of posterior samples affect the constraint violation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce a semi-supervised Bayesian Neural Network (BNN) framework to efficiently solve constrained optimization problems common in engineering applications like energy networks, where uncertain parameters and limited labeled data pose computational challenges. By alternating between supervised cost-minimization with labeled data and unsupervised constraint satisfaction with unlabeled data, the model achieves a tenfold reduction in equality gaps and halves feasibility and optimality gaps compared to standard BNNs and deep neural networks. Additionally, a novel Selection via Posterior (SvP) scheme leverages BNN uncertainty estimates to further minimize errors, while the framework\\u2019s probabilistic confidence bounds offer a scalable solution adaptable to various high-stakes optimization problems with minimal labeled data requirements.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The strengths of this paper lie in its development of a semi-supervised Bayesian Neural Network (BNN) approach that addresses constrained optimization challenges under strict time and data constraints. First, the choice of BNNs over DNNs enhances uncertainty quantification, facilitating more reliable predictions by integrating prior beliefs, and is novel. The introduction of a novel Sandwich learning method further strengthens the model by enforcing feasibility through unlabeled data, bypassing the need for additional labeled instances. Moreover, the use of predictive variance within the BNN framework enables the formulation of tight expected error bounds.\", \"weaknesses\": [\"The authors should correct the citation formatting throughout the paper, particularly the use of \\\\citet and \\\\citep. 
For example, the citation of Ibrahim et al. (2020) should be formatted as (Ibrahim et al. 2020) rather than including it in the narrative.\", \"The overall presentation of the study is weak and requires enhancement for improved clarity and impact.\", \"A comparison against methods that require more training time is necessary. Since the neural networks proposed by Park & Van Hentenryck (2023) and others do not necessitate extensive training time, it is crucial to understand the implications of low computational requirements. Although low compute restrictions during test time are typically essential for inference on new examples, models are trained far less often, which may render such restrictions (e.g., a hard limit of 10 minutes of training time on a single CPU core) unnecessary.\", \"What is the justification for including equation 1d as part of the constraints?\", \"The assertion that constructing the feasibility dataset $D_f$ incurs no additional computational cost should be revisited. While input sampling may indeed be inexpensive, obtaining feasible solutions can be resource-intensive, necessitating a more accurate representation.\", \"The authors should clarify why they did not compare their approach with the primal-dual self-supervised learning method proposed by Park & Van Hentenryck (2023). Additionally, the methods of DC3 and Zamzam & Baker (2020) should be included in the comparison. A new table could have been created to compare the training times of these methods, as it is possible to obtain outputs from these models without fitting them completely.\", \"Completely omitting state-of-the-art baselines does not appear to be a prudent choice. The authors should include the results of these methods while indicating that they require more time and thus cannot be used for direct comparison.\", \"Additionally, the authors have not accounted for the supervised dataset creation time within the 10-minute limit. 
Since self-supervised methods do not necessitate this step, the current comparison may be considered unfair. The authors should revise the training time to reflect a total of 10 minutes for training plus T minutes for generating true labels for the supervised dataset. This revised training time should then be applied uniformly to both the supervised and self-supervised methods, even when the latter do not require true labels.\", \"Is the statement, \\u201cMoreover, unsupervised methods still require testing data and consequently the associated data generation time in order to perform validation and provide confidence bounds on error with respect to true solution,\\u201d not applicable to the proposed method? Although the proposed method does not require data generation for confidence bounds, it does require labeled training data, which is not needed for self-supervised methods. Additionally, to compute the gap and constraint violations, wouldn\\u2019t the proposed method also need true labels?\", \"The manuscript contains several grammatical mistakes that should be addressed, including but not limited to:\", \"\\\"The fundamental idea of this training method is to update the network weights and biases through multiple rounds of training in which each round alternates between using the labeled dataset.\\\"\", \"\\\"Next, we present results for Probabilistic Confidence bounds, described in Section 4. 
Figure 3 shows that...\\\"\", \"\\\"In Bayesian Neural Network (BNN) literature, the standard approach is to use the mean posterior prediction $E_{p(w|D)}[f_w(x_t)]$ for a test input $x_t$.\\\" Please clarify the context for \\\"a standard approach.\\\"\"], \"questions\": \"Please refer to the section on weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer KQjj : On Motivation\", \"comment\": \"The key advantage that BNNs\\u2014or Bayesian methods in general\\u2014offer over DNNs and other deterministic methods in any learning problem (not just optimal power flow) lies in their performance under limited data conditions. With a sufficiently large dataset, DNNs or other universal approximation methods should perform equally well, if not better. However, as stated in the Introduction, the realistic regime for power grid optimization problems involves limited data and constrained training time. Another perspective when comparing BNNs and DNNs is that BNNs can separate two types of uncertainty: **Epistemic uncertainty**, which reflects the uncertainty in model parameters ($p(w|D)$, where $w$ represents the parameters and $D$ is the training data), and **Aleatoric uncertainty**, which represents the inherent uncertainty in the data itself ($p(y|x, w)$, where $y$ is the output and $x$ is the input). This separation allows BNNs to be highly data-efficient, enabling them to effectively learn from small datasets without overfitting [R1\\u2013R2]. During prediction, instead of confidently producing an incorrect result, BNNs assign high epistemic uncertainty to points far from the training dataset, signaling that the model lacks sufficient knowledge about them. This eliminates the need to rely on **Ensemble DNNs** to account for weight distributions and build model robustness. 
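The epistemic/aleatoric separation described above is usually written via the law of total variance; as a reference point, here it is in standard Bayesian-deep-learning notation (a generic identity, not a formula quoted from the paper):

```latex
\operatorname{Var}[y \mid x, \mathcal{D}]
  = \underbrace{\mathbb{E}_{p(w \mid \mathcal{D})}\!\big[\operatorname{Var}[y \mid x, w]\big]}_{\text{aleatoric}}
  + \underbrace{\operatorname{Var}_{p(w \mid \mathcal{D})}\!\big[\mathbb{E}[y \mid x, w]\big]}_{\text{epistemic}}
```

The second (epistemic) term is the one that grows for inputs far from the training data, which is the behaviour invoked in this reply.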
Consequently, BNNs save significant computational resources that would otherwise be required to train multiple DNN models to create an ensemble.\\n\\nAdditionally, we have demonstrated that the inherent randomness in BNNs allows for the rapid development of much better confidence bounds on predicted outputs compared to DNNs on similarly sized test datasets. This is primarily because the mean predicted variance of a BNN better approximates the true variance of outputs than the empirical variance computed by DNN models. While this observation is still empirical, our results confirm its validity across a range of power grid networks, including large networks that have not been addressed in existing DNN-based studies in this domain. Assuming that the variance predictions are reliable, we show that our model reduces confidence error scaling from $1/\\sqrt{M}$ to $1/M$, where $M$ is the number of test samples. \\n\\nFinally, we argue that the proposed work offers a novel perspective on ACOPF and optimization proxies. The ability of BNNs to quantify uncertainty in predictions facilitates effective active learning and Bayesian Optimization [R3]. This work lays the foundation for using BNNs as surrogates in various engineering problems, where active learning and/or Bayesian Optimization can be employed to sample more informative datasets. Therefore, this study should not only be evaluated from a state-of-the-art standpoint but also as a contribution that opens new directions in the field. \\n\\n\\n**We will ensure that the manuscript is updated with these discussions and additional details.**\\n\\n[R1] Laurent Valentin Jospin et al., \\n\\\"Hands-on Bayesian neural networks\\u2014A tutorial for deep learning users.\\\" IEEE Computational Intelligence Magazine 17, no. 2 (2022): 29-48\\n\\n[R2] Stefan Depeweg et al. 
\\\"Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning\\\", Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1184-1193, 2018.\\n\\n[R3] Yucen Lily Li et. al. \\\"A Study of Bayesian Neural Network Surrogates for Bayesian Optimization\\\" ICLR 2024.\"}", "{\"comment\": \"We thank the reviewer for their comments.\", \"point_1\": \"Reviewer is correct to say that semi-supervised training approach proposed in this work can be applied to any NN. However, we want to argue that with limited training data, A BNN is better suited for learning ACOPF than NNs and DNNs. This argument can be corroborated from results presented in the paper where simple BNN always outperforms a NN with both MSE and MAE loss function (Comparative result Table 1 and 2 in Section 5 and Table 6 and 7 in Appendix C). Moreover, when using NN we do not have access to predictive uncertainty and thus cannot use the superior theoretical Bernstein inequality for error bounds.\\n\\n-- Yes, training BNN is more expensive than regular NN. However, in the present works setting, the training is constrained by the time and number of data points available are very limited. In such settings, the BNN outperforms NNs as shown in the results. We have adopted a setting motivated by practical considerations where there is a hard upper bound on the total time available. This is a different setting where the theoretical complexity of training may not be as relevant compared to what is the best that can be done in limited time. Please find details of motivation and additional results here: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link\\n\\n-- Firstly, we want to highlight that for selection via posterior strategy, reduction of the constraint violation is only one of the criteria. One can easily design a similar strategy for minimum cost, minimum feasibility error or a weighted combination of both. 
BNNs provide a principled way to quantify the uncertainty in the predicted output, which is not the same as adding arbitrary noise to NNs. In BNNs, the weights are probabilistic and each weight of the network has a non-trivial posterior distribution which is updated during training. The distribution of the output is then a consequence of the weight distribution. The significantly larger probabilistic search space afforded by the weight distributions (often bimodal in our case) can lead to an output distribution that is very different from Gaussian, even when the weight distributions started with a Gaussian prior. Therefore, adding Gaussian noise to the output of an NN will not correspond to the same thing. For uncertainty quantification through NNs, a prominent approach is to use ensembles of NNs, which provide an output distribution via multiple possible weights. However, training a large number of NNs requires a lot of training time and resources, along with variations in the training dataset. Thus, in the current work's limited time and data setting it won't be feasible, and the BNN-based approach is the preferred method both for accuracy and confidence bound generation. \\n\\n-- The hypothesis of using MPV as a proxy for TVE is motivated by our empirical observations. For ACOPF proxies, we observed that two times the MPV is always greater than the total variance in error. Furthermore, as explained from line 294 onward, the first term of the total variance in error is the same as the MPV. If the second term of the total variance (expectation over weights and variance over samples) is lower than the MPV, our hypothesis holds. For ACOPF, we observed that the second term of the TVE (Equation 5) is significantly lower than the MPV (the first term of the TVE). Figure 3 has been included to validate the hypothesis for various power system test cases. 
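The selection-via-posterior idea defended above can be sketched schematically. This is a toy illustration under assumptions, not the paper's implementation: the constraint-violation metric, the number of posterior draws `K`, and the synthetic samples standing in for BNN forward passes are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint_violation(y):
    # Toy stand-in for an equality-gap metric; in the ACOPF setting this
    # would measure the nodal power-balance violation of prediction y.
    return abs(float(y.sum()) - 1.0)

def select_via_posterior(posterior_samples):
    """Pick the posterior prediction with the smallest violation.
    The same scheme could rank by cost, feasibility error, or a
    weighted combination, as noted in the reply above."""
    violations = [constraint_violation(y) for y in posterior_samples]
    best = int(np.argmin(violations))
    return posterior_samples[best], violations[best]

# Stand-in for K forward passes through a BNN with resampled weights;
# with a real BNN each draw would come from the weight posterior.
K = 32
samples = [rng.normal(loc=0.25, scale=0.05, size=4) for _ in range(K)]
chosen, best_gap = select_via_posterior(samples)
mean_gap = float(np.mean([constraint_violation(y) for y in samples]))
print(best_gap, mean_gap)
```

By construction the selected draw's violation is never worse than the average draw's, which is the mechanism the SvP scheme exploits; the non-Gaussian shape of real BNN output distributions only changes where the draws land, not this selection logic.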
Note that Figure 3 shows results for both supervised and sandwich BNNs, covering a total of six instances where the hypothesis holds. We have also observed the same characteristic of MPV in other test cases. We will clarify and update the hypothesis discussion in the revised manuscript.\\n\\n-- Our confidence bound is similar to Proposition 5 in [1]. In the premise of Proposition 5, an absolute bound on the error (C) and on the second moment ($\\\\gamma^2$) is assumed, and the result is derived using the theoretical Bernstein inequality. In our setting, by using the automatically generated MPV from the training phase of BNNs, along with the hypothesis in line 286 (Eq. 5, clarified in our response to the previous question), we are able to obtain a similar confidence bound. Note that in either case, this bound is to help quantify the accuracy over i.i.d. out-of-sample test data. Section 4 in our paper is not attempting to generalize the training accuracy, which is complicated due to the dependence between the model and the training samples. We will cite [1] in the revised document.\", \"title\": \"Reply to Reviewer KQjj\"}", "{\"title\": \"Reply to Reviewer KQjj on Experimental Results\", \"comment\": \"We would like to highlight three key points in light of the **experimental results**:\\n\\n1. **Superiority of Supervised BNNs** \\n The experimental results clearly show that a simple supervised BNN significantly outperforms DNN models. We would like to emphasize that our contribution includes using BNNs to create ACOPF proxies, not just the sandwich learning model. Based on the results comparing supervised BNNs and supervised DNNs, we argue that BNNs are more suitable for learning proxies under low training data settings. This empirical, results-based assertion aligns with findings in various BNN studies [R1-R2]. \\n\\n2. **Cost of Unsupervised Training** \\n We want to stress that unsupervised training is computationally expensive. 
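As a rough numeric sanity check of the Bernstein-based argument in the reply above, here is a sketch, not the paper's code: the values of M, delta, the error-range constant C, and both variances are illustrative assumptions; the variance-informed bound is the standard Bernstein inequality with sigma2 taken as 2 * MPV per the stated hypothesis, and the empirical variant follows the Maurer & Pontil (2009) form.

```python
import math

def bernstein_bound(sigma2, C, M, delta):
    """Two-sided Bernstein bound on the deviation of a mean of M i.i.d.
    errors bounded by C, using a known variance proxy sigma2
    (here taken as 2 * MPV, per the hypothesis discussed above)."""
    L = math.log(2.0 / delta)
    return math.sqrt(2.0 * sigma2 * L / M) + C * L / (3.0 * M)

def empirical_bernstein_bound(var_hat, C, M, delta):
    """Empirical Bernstein bound (Maurer & Pontil, 2009) that replaces
    the true variance with the sample variance var_hat."""
    L = math.log(2.0 / delta)
    return math.sqrt(2.0 * var_hat * L / M) + 7.0 * C * L / (3.0 * (M - 1))

# Illustrative numbers: MPV ~ 1e-3 for the BNN, empirical error
# variance ~ 1e-4 for a hypothetical baseline, 1000 validation samples.
M, delta, C = 1000, 0.05, 1.0
b = bernstein_bound(sigma2=2e-3, C=C, M=M, delta=delta)
eb = empirical_bernstein_bound(var_hat=1e-4, C=C, M=M, delta=delta)
print(b, eb)
```

With these illustrative numbers the variance-informed bound comes out tighter than the empirical one even though the empirical variance is an order of magnitude smaller, because the empirical bound pays a larger O(1/M) penalty; this mirrors the argument made about MPV-based bounds in the thread.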
While some papers do not explicitly report training times, the state-of-the-art unsupervised learning work indicates that primal-dual learning (Park & Van Hentenryck, 2023) requires significantly longer training times\\u2014for example, **5,932.5 seconds** for the 57-bus system and **7,605.1 seconds** for the 118-bus system. In contrast, our method trains within **600 seconds**, making their training times approximately 10 times longer. For this reason, we did not use such methods for benchmarking. \\n\\n3. **Projection Using Power Flow** \\n Achieving a zero equality gap requires solving nonlinear power flow equations, as done in DC3 and by Zamzam & Baker. However, the method used by Huang, W., & Chen, M. (2021) does not achieve a zero gap. After predicting voltage and angle, the $P_g$ and $Q_g$ values (obtained from equations (2) and (3)) are fixed, leaving no degrees of freedom to satisfy nodal power balance. This limitation is reflected in their results, where load satisfaction is less than 100%. However, the nodal balance violation (i.e., equality gap) information is not explicitly provided. \\n\\n Additionally, the equation-solving method proposed by Huang, W., & Chen, M. (2021) is directly applicable to our proposed method. It could even enable an SvP-style mechanism to derive optimal weights through linear equation solving. \\n\\nLastly, while we agree that a sandwich DNN could serve as a baseline for BNN models, it would constitute a distinct contribution. This is because, with DNNs, it is crucial to explore merging supervised and unsupervised layers, which is not feasible with BNNs, as highlighted in our earlier response. \\n\\n[R1] Laurent Valentin Jospin et al., \\n\\\"Hands-on Bayesian neural networks\\u2014A tutorial for deep learning users.\\\" IEEE Computational Intelligence Magazine 17, no. 2 (2022): 29-48\\n\\n[R2] Stefan Depeweg et al. 
\\\"Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning\\\", Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1184-1193, 2018.\"}", "{\"summary\": \"The paper proposes Bayesian Neural Networks (BNNs) for solving the OPF problem under uncertain demand. It employs a novel semi supervised training procedure that reduces the dependence of the training to labelled input-output pairs. The study also suggests using Bernstein bound with Mean Predictive Variance to assess the BNN model performance on out-of-sample inputs.\", \"the_authors_consider_two_bnn_training_methods\": \"a supervised approach using labelled data and a semi-supervised approach, or sandwich learning, which alternates between supervised and unsupervised loss functions iteratively. The latter enforces feasibility by utilizing a function that takes the value 0 if the constraints are satisfied.\\n\\nThe model was applied to 57-, 118-, 500- and 2000-bus test cases and compared against 5 alternative methods in the literature.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1.\\tThe main advantage is that the BNN seem to have some performance advantages over DNNs for small and large systems even with training datasets as small as 512 observations. While the optimality gaps are close among all methods, there are notable power balance gap differences between BNN and DNN methods.\\n2.\\tIn case of large systems such as 2000-bus system, with 512 observations, while DNN training fails, all BNN models still able to perform well. This is particularly useful when it is computationally expensive to generate datasets.\\n3.\\tAnother contribution is using unsupervised loss term in the training loop, which can work like regularization term, focusing the attention of training to model parameters that would generate more realistic (feasible) solutions. 
The sandwich training that alternates between supervised and unsupervised data is akin to the Physics-Informed Neural Network training proposed by [1]. While PINN training contains all KKT conditions in one loss function, the proposed Sandwich learning alternates between two loss functions.\\n\\n[1] Nellikkath, Rahul, and Spyros Chatzivasileiadis. \\\"Physics-informed neural networks for minimising worst-case violations in dc optimal power flow.\\\" 2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). IEEE, 2021.\", \"weaknesses\": \"1.\\tThe optimality gaps among different methods are very close for the 118- and 500-bus cases. The only exception is the 57-bus system, where sandwich learning shows some improvement. It looks like the only consistent advantage is fewer power balance violations.\\n2.\\tIt is not clear if sandwich learning improves BNN model performance. The differences among the proposed BNN approaches seem marginal and are not consistent among all test cases.\\n3.\\tIf I understand correctly, the DNNs in the study are also trained with 512 data points and the training was terminated after 600 seconds (please correct me at this point if I missed the relevant part of the script). My main concern is that training DNNs (with two hidden layers and n_hidden = 2 x input size) with 512 observations is not fair. DNNs are universal function approximators whose number of linear facets is determined by hidden-layer depth and width [2], and the training must be carried out with dense enough datasets for a good estimation, especially in the case of larger systems. I understand this is one of the advantages of using BNNs, but larger training datasets can still be feasible to construct. \\n4.\\tIt is not surprising that in the case of the 2000-bus system, the training would fail with a small dataset and short training time because the NN model (with a hidden layer width of 2 x input size) has too many trainable parameters. 
The authors could find the required training size for DNNs to perform as well as BNNs as a proof of the scalability of the latter.\\n\\n[2] Montufar, G. F., Pascanu, R., Cho, K., & Bengio, Y. (2014). On the number of linear regions of deep neural networks. Advances in neural information processing systems, 27.\", \"questions\": \"1.\\tHow many data points were used to train the DNN models?\\n2.\\tI wonder if the authors tried to use Sandwich learning to train DNNs. As it reduces the dependence on labelled datasets, it can be a good improvement to scale up to larger systems with limited observations. \\n3.\\tSome papers in the literature call a shallow NN a model with two hidden layers, i.e. input -> hidden and hidden -> output. Others refer to shallow NNs as one-layer models. If it is the case of the former, to name it a DNN it must have more than two layers. I wonder which is the case in this study. If a shallow NN was trained, then deeper models can have better advantages.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer Vowh\", \"comment\": \"We thank the reviewer for their thoughtful comments.\\n1. Yes, our primary focus was on feasibility without compromising on optimality, in time- and data-constrained learning settings. The primary motivation of this paper is to develop an optimization proxy under a practical but challenging situation where total labeled data for training as well as total time for training and testing are constrained, as shown in Figure 1 on this link: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link\\nThis is different from situations where a large number of training samples can be generated, or where much longer training time $T_{training}$ for model tuning and testing time $T_{prediction}$ for prediction are available. 
Technically, the number of training samples is itself a proxy for the data generation time $T_{Data}$. In this context, note that $T_{prediction}$ for our model is at least **10 times lower** than the DC3 approach or Zamzam & Baker's approach, because our model does not involve a projection step that requires solving power flows or access to a power flow solver during either training or testing. When compared to models that have fast testing times, our results show up to an order of magnitude improvement in feasibility gaps, without compromising the optimality of predictions. E.g., the optimality gap reduces from 1.284 for the best point-prediction method to 0.089 for the proposed method for case118 in Table 2. Second, our proxy, due to the use of a BNN, is able to generate non-trivial confidence bounds on the accuracy of prediction despite a limited number of 1000 validation data points. As shown in Figure 4, prior DNN-based models, when using the same validation dataset and Hoeffding's or the Empirical Bernstein inequality, do not produce meaningful performance bounds.\\n\\n2. We emphasize that our contributions extend beyond sandwich learning BNNs to include simple BNNs for constrained optimization and Bernstein inequality-based error guarantees. The results demonstrate that BNNs are well-suited for data- and time-constrained environments. For smaller power grids, the sandwich model effectively reduces power balance violations, achieving its intended goal. However, for larger systems, limited training time prevents the model from fully utilizing available information. Nonetheless, to maintain practicality, we uniformly restrict training to 10 minutes for all examples.\\n\\n3. Yes, the DNN training used 512 samples and was capped at 600 seconds. The goal was to compare BNN and DNN performance under low data and limited training time conditions. While larger datasets could be constructed, the time required increases significantly with system size, as noted in Park & Van Hentenryck (2023). 
More critically, this added data creation time is a major drawback in power system operations, where datasets are often unavailable or difficult to obtain, especially for N-1 line outage scenarios. This setting aligns closely with the motivation of the proposed work.\\n\\n4. Yes, the DNN model for the 2000-bus system has too many trainable parameters and is failing because of the very limited data and training time. Also, we agree that with more data and training time, DNNs will start to perform better and at some point exceed the BNN performance (since DNNs can be trained more efficiently compared to BNNs). However, we want to again highlight that our motivation is not to say that BNNs are better than DNNs in general. Our more limited assertion in this submission is that, under data- and time-constrained settings, BNN-based models outperform DNN models for the ACOPF problem, as described in the motivation above.\", \"questions\": \"1. We used 512 labeled data points for DNN training across all cases, as outlined in Section 5. \\n\\n2. While this work focuses on BNNs, we agree that sandwich learning has potential applicability for DNNs, as it could reduce reliance on labeled datasets. However, we wish to emphasize that using a DNN alone can only provide Hoeffding\\u2019s and Empirical Bernstein bounds on the expected error, as discussed in Section 4 of the paper. An intriguing direction, without losing the Bayesian nature of predictions, would be to use a DNN exclusively for the unsupervised part of training. However, this approach would require substantial effort in designing sequential priors for the supervised BNN stages that follow the unsupervised DNN stages. We plan to explore this direction in future work.\\n\\n3. It is standard practice to train NNs with two hidden layers for the ACOPF problem. Almost all existing works adopt a two-hidden-layer architecture.
An intuitive explanation for this stems from the fact that power flow equations are quadratic in nature, so two or three nonlinear transformations should suffice to capture the input-output relationship. While it cannot be ruled out that deeper networks might offer better performance, they are much harder to train within the constrained training time setting of the proposed method due to the significantly larger number of parameters.\"}", "{\"title\": \"Reply to Reviewer MUif\", \"comment\": \"We thank the reviewer for their comments.\\n\\n1. We have used the bibliographystyle\\\\{iclr2025\\\\_conference\\\\} for citations and will re-check and update the reference format as per guidelines.\\n2. We would like to request the reviewer to expand more on this and specify what improvements the authors could make for the reviewer to adjust the score. Additionally, we have decided to update the manuscript with one motivation diagram and some additional results on a different dataset comparing with other state-of-the-art methods like DC3. Detailed results, to be updated in the revised manuscript, can be found at: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link\\n3. Our focus was on ensuring feasibility without compromising optimality in settings constrained by limited training data and time. The goal was to develop an optimization proxy for practical yet challenging scenarios where labeled data and total training and testing times are restricted (see Figure 1: (https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link)). This differs from scenarios with abundant training samples or extended training ($T_{\\\\text{training}}$) and testing ($T_{\\\\text{prediction}}$) times. Notably, $T_{\\\\text{prediction}}$ for our model is at least **10 times lower** than approaches like DC3 or Zamzam & Baker's, as it avoids projection steps requiring power flow solvers during training or testing.
Additionally, our results show up to an order of magnitude improvement in feasibility gaps without compromising optimality. For example, the optimality gap for case118 reduces to **0.089** with our method, compared to **1.284** for the best point prediction method (Table 2). Our BNN-based proxy also generates meaningful confidence bounds on prediction accuracy despite using only 1,000 validation samples. As shown in Figure 4, prior DNN-based models, even with the same validation data and using Hoeffding's or Empirical Bernstein inequalities, fail to provide meaningful bounds. Self-supervised methods like primal-dual learning (Park & Van Hentenryck, 2023) require significantly longer training times, e.g., **5,932.5 seconds** for the 57-bus system and **7,605.1 seconds** for the 118-bus system. In contrast, our method trains within **600 seconds**, making their training times 10 times longer. For this reason, we did not use such methods for benchmarking. \\n4. We included 1d to be explicit about the input to the optimization problem. We will move it to the paragraph below equation (1) in the revised manuscript.\\n5. We respectfully disagree with the reviewer\\u2019s concern regarding the computational intensiveness of constructing the feasibility dataset $D_f$. The process of creating $D_f$ is, in fact, computationally light because it relies solely on the necessary conditions for feasibility in constrained optimization problems. For any feasible solution to a constrained optimization problem, the constraints must be satisfied exactly\\u2014resulting in zero constraint violations or \\\"gap\\\" as defined in equation (2). Thus, by definition, the feasibility dataset $D_f$ is constructed such that each entry in $D_f $ corresponds to a constraint satisfaction value of zero. Specifically, given any input $ \\\\mathbf{x}$ in $ D_f $, the output is identically zero, indicating that all constraints are met. 
Therefore, constructing $D_f$ is as computationally inexpensive as simply sampling inputs, since there is no additional cost for evaluating or solving complex constraint satisfaction conditions. The feasibility requirement simplifies the dataset creation process, ensuring that the computational load remains minimal. This part is mentioned in the paragraph below equation (2) of the manuscript.\\n6. As mentioned before, the proposed work's motivation is to learn an ACOPF proxy under limited training data and training time situations. Therefore, methods which take a considerably large amount of time to train (approx. 6000 sec. by Park & Van Hentenryck (2023) for the 57-bus system) do not fit within the motivation of this work. Moreover, an important difference between the proposed approach and methods such as DC3 and Zamzam & Baker (2020) is the presence of power flow in the pipeline during training and prediction. The inclusion of power flow implies that we need to solve nonlinear equations within the proxy to obtain accurate predictions. This leads to a considerable increase in both training and prediction time. For example, in the DC3 paper it is listed that it takes 0.089 sec. to predict the ACOPF solution for one instance, which is approximately 30 times higher than the 0.003 sec prediction or inference time of the proposed method. This limits the use of these models in stochastic settings such as probabilistic risk quantification where a large number of (on the order of millions) predictions are needed.\"}", "{\"metareview\": \"**Summary**\\n\\n\\nThe paper introduces a semi-supervised Bayesian Neural Network (BNN) framework designed to tackle constrained optimization problems, particularly the optimal power flow (OPF) problem with uncertain demands and limited labeled data. The approach alternates between supervised learning, which minimizes the optimization problem's cost, and unsupervised learning, which enforces related constraints, referred to as sandwich learning.
This method significantly reduces feasibility and optimality gaps by employing a novel training procedure that decreases reliance on labeled data. The authors also implement a Selection via Posterior (SvP) scheme that uses BNN uncertainty estimates to enhance model performance and ensure feasibility. Experimental results across various test cases, including 57-, 118-, 500-, and 2000-bus networks, demonstrate that this BNN framework outperforms traditional deep neural networks (DNNs) and other conventional methods, especially in scenarios characterized by sparse data and computational resource constraints.\\n\\n**Strengths**\", \"the_reviewers_unanimously_highlighted_several_strengths_of_the_proposed_framework\": [\"The paper is well-organised and easy to follow.\", \"The use of BNN seems to have some performance advantages over DNNs for small and large systems even with training datasets as small as 512 observations.\", \"The proposed unsupervised loss term in the training loop and the sandwich training scheme used in the paper are interesting and add practical value to the proposed solution.\", \"**Weaknesses**\", \"Several core weaknesses were brought up by the reviewers. These include:\", \"The paper's presentation could be enhanced.\", \"The lack of comparison with more recent relevant baselines, including self-supervised and primal-dual learning approaches, due to their higher training times and computational demands.
The paper could benefit from reporting such baselines and their corresponding training times.\", \"While this work focuses on addressing the challenges of limited labeled data and restricted model training times, it is not clearly explained how Bayesian Neural Networks (BNNs) specifically contribute to overcoming these challenges.\", \"Limited discussions on limitations of the proposed solution.\", \"**Conclusion**\", \"The majority of reviewers recognize the importance of the problem addressed by the paper but criticize the experimental setup and the paper's inadequate positioning, including a lack of rigorous comparisons with recent baselines. Despite a substantial rebuttal from the authors, this did not alter the predominantly negative perception held by the reviewers. I agree with the reviewers and vote for rejection of the paper.\"], \"additional_comments_on_reviewer_discussion\": \"Despite my efforts to engage the reviewers in a discussion during the review period to reach a consensus on the paper\\u2019s merits and shortcomings, there was no participation in any discussion. However, given the less polarized evaluations of this paper, the discussion was not as critical for this work.\"}", "{\"comment\": \"Point 2: We want to clarify that one cannot combine supervised and unsupervised losses in Bayesian Neural Networks (BNNs) without having a tight lower bound on the cost function with unlabeled input data. This is because BNNs are trained using the Bayesian inference principle, where a likelihood function needs to be defined for the output. In the unsupervised learning stage, the proposed method relies on the fact that the observation for the feasibility gap will be zero for any input. This allows the definition of a likelihood for constraint satisfaction that attempts to achieve a distribution centered around zero (although a perfect delta distribution is not achievable due to numerical issues). 
To construct a similar likelihood for the cost (as done in the supervised training phase), we require tight bounds on the cost for unlabeled data.\\n\\nThe setting of traditional NNs is different, as one can define a loss function that combines cost and weighted feasibility terms. These NNs are often referred to as Physics-Informed Neural Networks (PINNs). We have compared our proposed approach with these models (MAE + penalty and MSE + penalty) across various test cases presented in the paper. The results show that the proposed BNN approach outperforms these PINNs.\", \"point_3\": \"Section 5 of the manuscript provides the separate contribution of each of the components. For example, the first row of Table 1 shows the improvement of Sandwich BNN SvP compared to Sandwich BNN, where both the cost function and the feasibility gap improve. Similarly, comparing the Sandwich BNN and supervised BNN results indicates the benefits of Sandwiching.\\n\\nWe will update the discussion in the revised paper to highlight the contribution of each part separately.\", \"point_4\": \"As mentioned in the manuscript, the proposed work's motivation is to learn an ACOPF proxy under limited training data and training time situations. Therefore, methods which take a considerably large amount of time to train (approx. 6000 sec. by Park & Van Hentenryck (2023) for the 57-bus system) do not fit within the motivation of this work. Moreover, an important difference between the proposed approach and methods such as DC3 and Zamzam & Baker (2020) is the presence of power flow in the pipeline during training and prediction. The inclusion of power flow implies that we need to solve nonlinear equations within the proxy to obtain accurate predictions. This leads to a considerable increase in both training and prediction time. For example, in the DC3 paper it is listed that it takes 0.089 sec.
to predict the ACOPF solution for one instance, which is approximately 30 times higher than the 0.003 sec prediction or inference time of the proposed method (with 100 weight samples). This limits the use of these models in stochastic settings such as probabilistic risk quantification where a large number of (on the order of millions) predictions are needed.\\n\\nDetailed results, to be updated in the revised manuscript, can be found at: https://drive.google.com/file/d/1Y57JVPegi2HY2krnLb7qihMvQNdWvfdg/view?usp=share_link\\n\\nSimilar to methods such as DC3 and Zamzam & Baker (2020), the work of Huang, W., & Chen, M. (2021) also has power flow in the pipeline during prediction. The inclusion of power flow implies that we need to solve nonlinear equations within the proxy to obtain accurate predictions. This leads to a considerable increase in both training and prediction time. For example, in the DC3 paper it is listed that it takes 0.089 sec. to predict the ACOPF solution for one instance, which is approximately 30 times higher than the 0.003 sec prediction or inference time of the proposed method (with 100 weight samples). This limits the use of these models in stochastic settings such as probabilistic risk quantification where a large number of (on the order of millions) predictions are needed.\", \"questions\": \"1. We have observed that the proposed method also follows the general principle where the prediction error decreases with an increase in training data. However, when the training time is limited, increasing the number of training samples beyond a certain threshold does not provide additional improvements. In our experiments, we observed that for systems larger than 118-Bus, increasing training samples beyond 1000 does not provide further benefits when the training time is limited to 600 sec. We will append these results to the revised manuscript.\\n\\n2. The effect of the number of posterior samples on the output is presented in Figure 5, Appendix C.
It shows that errors in various parameters stabilize around 200 posterior samples.\", \"title\": \"Reply to Reviewer KQjj (Point 2 Onwards)\"}" ] }
EWiWMoynco
NAQ: Nonlinearity-Aware Quantization
[ "Jonathan S. Lew", "Tor M. Aamodt" ]
Transformer-based large language models and vision transformers have achieved remarkable performance, but at a high energy cost. Nonlinearities (e.g., GELU, softmax) have regions where the magnitude of the gradient is small, which means that errors in pre-nonlinearity inputs result in small output error. We propose Nonlinearity-Aware Quantization (NAQ), which involves computing the FC layer outputs and attention scores at low precision, predicting the magnitude of the gradient of the nonlinearity, and recomputing the pre-nonlinearity if the gradient magnitude is large. With future hardware support, models with NAQ would avoid up to 62% of full precision pre-nonlinearity computation and would achieve up to a 29% reduction in energy consumption, with small effects on model performance.
[ "transformer", "energy", "quantization", "activation function", "gelu", "softmax", "llm", "vision transformer" ]
Reject
https://openreview.net/pdf?id=EWiWMoynco
https://openreview.net/forum?id=EWiWMoynco
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVMdRcqPp5", "pZqknxeqXN", "cQZZTLUmC6", "PvI1mSJsBi", "N7ERJsvcrf", "LQZeUp0Zvy" ], "note_type": [ "official_review", "official_review", "decision", "meta_review", "official_review", "official_review" ], "note_created": [ 1730692357869, 1730300520963, 1737524142593, 1734532966130, 1730238480700, 1730083330227 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11731/Reviewer_GAVm" ], [ "ICLR.cc/2025/Conference/Submission11731/Reviewer_HQ5k" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11731/Area_Chair_AZZw" ], [ "ICLR.cc/2025/Conference/Submission11731/Reviewer_FhZz" ], [ "ICLR.cc/2025/Conference/Submission11731/Reviewer_Uw9k" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a quantization scheme for attention score and FC1/pre-activation computation in transformers, targeting resource-constrained or energy-consumption-sensitive applications. The basic flow is to perform INT4 computations of the two components mentioned above, use the results and the gradient function as estimators for quantization errors, and then perform a partial re-computation in full precision for those elements with higher risks. Based on the authors' estimate, OPT350m (GPTQ) may have a chance to save ~30% of computation energy while maintaining acceptable model accuracy. ViT (INT8) on ImageNet, on the other hand, may only have room for ~10% energy saving.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Well written and easy to follow.\\n2. Reasonable amount of experiments to demonstrate and support the proposed method.\", \"weaknesses\": \"1. Data access/movement energy consumption was not taken into account.\\nThe energy consumption discussed in Fig. 3/5/6/7/8 is based on \\\"computation cost\\\", where the potential energy saving from FP32 to INT4 is estimated from reference Horowitz2014.
But in the same reference it also shows that the cost for memory access is ~10-100x higher than mul-add computation. This missing piece is critical because the proposed method, as illustrated in Fig. 4, may need to either do an on-the-fly quantization from FP32 weights to INT4 weights (which will require mem allocation) or hold two sets of weights (one for INT4 and one for FP32) for re-computation purposes. At least the model/kernel will need to do N additional (GPU DRAM) accesses, where N equals the number of elements being recomputed. Furthermore, for generative LLMs it is very common that the inference is memory-bound instead of compute-bound, which means the memory access cost would very likely dominate the total energy consumption. This is exactly the reason why GPTQ inference can be done in FP16 but still achieve close to 4x speed-up, as the memory access cost is greatly reduced by employing INT4 weights. \\n\\n\\n2. Overhead of sparsity-like HW implementation.\", \"the_author_suggests_that_the_partial_re_computation_could_be_performed_in_a_similar_way_to_2\": \"4 structural sparsity. However, the author also acknowledges that this sparsity approach will require \\\"mask tensors\\\" of the same size as the input tensors to the matmul engine, which will incur additional data movement cost. One should note that the cost related to these two mask tensors adds on top of the data access cost mentioned above.
This again highlights the importance of including data accessing cost in the energy consumption analysis.\", \"questions\": \"Please address the concerns mentioned in Weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel quantization framework determining whether to quantize FC layers and attention scores dynamically by relying on gradient information from non-linearities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The NAQ method leverages the behavior of nonlinear activation functions (specifically, regions of low gradient magnitude) to recompute pre-nonlinearity values, minimizing full precision calculations conditionally. This approach of dynamically deciding whether to quantize or not based on gradient magnitude is interesting.\\n2. The paper provides a sound theoretical analysis of the proposed framework, highlighting how it differs from existing approaches such as SqueezeLLM.\", \"weaknesses\": \"1. While NAQ is evaluated on both language and vision models, comparisons with baseline methods are limited, with older large language models (LLMs) and baselines chosen for evaluation.\\n2. The framework relies on hyperparameters such as quantization thresholds per element. These threshold values determine when to recompute at full precision, but the paper could provide more explicit guidance on setting these values for various use cases or model types.\\n3. The lack of compatible hardware limits the practical implementation of the proposed method. Additionally, while the authors advocate for improved hardware support for sparsity, this does not directly correlate with the proposed method and could be seen as tangential to the paper\\u2019s focus. 
Lastly, the framework introduces additional latency in model inference due to the dynamic quantization strategy.\", \"questions\": \"1. Can you share your approach for selecting quantization thresholds, especially for different models?\\n2. Are there plans to evaluate NAQ on newer models (such as LLama) or against more recent quantization approaches (such as SqueezeLLM)?\\n3. Can you improve the hardware implementation section with possible directions for implementing such a framework on hardware?\\n4. Can you evaluate NAQ with different bit-width combinations for both weights and activations? Are there any limitations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper presents a promising approach to reducing the energy footprint of transformer models through nonlinearity-aware quantization. All reviewers provided consistently negative ratings, citing concerns regarding the practical implementation of NAQ, the accuracy of energy consumption estimates, the lack of consideration for data movement costs, the need for more explicit guidance on hyperparameter settings, etc. The authors did not provide a response to address these issues. The final consensus of negative ratings led to the rejection of this submission.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not provide a response to address the original concerns from reviewers. All reviewers maintained their original ratings.\"}", "{\"summary\": \"This paper proposes a methodology (Nonlinearity Aware Quantization - NAQ) to compute some transformer pre-nonlinearity operations at low precision, with the aim of limiting inference energy consumption. It proposes to leverage regions where the non-linearity gradient is small to perform low precision computations, while recomputing the operations that require high precision.
The targeted operations are the query/key attention product and the MLP FC1. Results are reported on a selection of models (BLOOM, OPT, DeiT, ViT), showing the trade-offs between the number of computations avoided (and related energy savings) and perplexity/accuracy.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The topic of improving energy consumption of transformer architectures at inference is of clear interest\", \"Results show that a significant fraction of computations (in the two targeted operations) can be performed at INT4 with limited performance degradation\"], \"weaknesses\": [\"The main limitation is a lack of hardware demonstration of the proposed concept. Hence, the authors estimate energy consumption from the cost of individual Multiply-Accumulate operations at varying precision (FP16, INT8, INT4). However, using these numbers is not convincing, as hardware overheads can vary for different data types and play a significant role\", \"Estimated advantage against INT8 is limited. As 8-bit formats (FP8, INT8) are becoming more popular along with expanded hardware support, the relevance of the proposed technique diminishes\", \"As recognized by the authors, this methodology adds inference overhead to latency, so it would only be applicable to energy-constrained scenarios\", \"A key claim of this paper is that some pre-nonlinearity computations can be computed in low precision due to the small gradient of the nonlinearity over some regions of their inputs. For the Query/Key attention computation, the authors use the low-precision quantization product (QP) as a predictor for the gradient, and compare against a manually-selected threshold to determine whether computation in higher precision is needed. However, the correlation between QP and the nonlinearity gradient is not established, only briefly mentioned\", \"The algorithm is not clear. It is stated that the QK Quantized Product is compared element-wise to a threshold.
Does *any* QP value above the threshold trigger the high-precision QP recomputation? It would be helpful for the algorithm to be properly spelled out\"], \"questions\": \"1. can the authors comment on their energy consumption estimates and projected energy savings, especially when compared to 8-bit formats?\\n2. expand on the correlation between QP and the MLP FC1 non-linearity gradient\\n3. clarify the algorithm insofar as re-computation is concerned\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focusses on a very important issue related to quantization and proposes a solution with a hypothesis that the magnitude of the gradient is small for the nonlinearities applied in the MLP layer and to the attention scores. The Authors propose an interesting method called Nonlinearity-Aware Quantization (NAQ) by computing the FC layer outputs and attention scores at low precision and recomputing the pre-nonlinearity if needed (when the magnitude of the gradient is high) by adding a conditional branch that decides whether re-computation is needed or not.
Results are discussed with an estimation of the energy saving calculated using prior work.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well written (except for a few grammatical mistakes, as suggested) and focussed on an important research topic.\", \"The nonlinear quantization issue is handled in an elegant way where all the computation can be performed at low bit-width initially and, based on the magnitude of the gradient, some of the computation is performed again at higher precision.\", \"Results suggest that this method has the potential to improve quantization of nonlinear regions.\", \"This work may be useful when hardware with relevant support is available.\"], \"weaknesses\": [\"One of the main weaknesses of this paper is the lack of results computed on real hardware (for the obvious reason that such hardware is not available)\", \"It is perfectly fine to estimate the energy consumption but I think the assumptions are not complete (please see detailed explanations below)\"], \"l238\": \"INT4 multiply-add cost = 0.065pJ needs a bit more explanation as the numbers from the reference provided do not match. May I request the Authors to please provide details of this calculation? Also, INT8, FP16 values are used from (Horowitz, 2014) but I am not sure if INT4 energy consumption will follow the same pattern. Can the Authors provide additional facts to justify this assumption?\", \"l321\": \"Can the Authors please elaborate more on how to handle the conditional branch, and any analysis that suggests that the impact will be negligible?\", \"l372\": \"Can the Authors please explain the relationship of sparsity and NAQ? If the relationship is not established, I would suggest removing this reference to improve the clarity of the paper.\", \"l380\": \"My assumption is that the comparison is done by adding NAQ only on the first MLP and attention layer. Can the Authors please confirm the same?
Also, if it is applied to only one layer (the first layer), then can the Authors provide additional results for other layers and all layers?\", \"l492\": \"10% energy saving is not very high, especially since it does not consider the additional complexity that this method adds, which is not accounted for in the energy saving computation.\", \"grammer\": \"\", \"l121\": \"Use ReLU consistently (RELU and ReLU are interchangeably used)\\nL368 \\\"there is NOT\\\" -> \\\"There is no\\\"\", \"questions\": \"Please answer these questions and I will be able to re-visit the score accordingly.\", \"l034\": \"How about the dynamic range of pre-nonlinearity values? Does it have a high dynamic range, and how will it impact the computation of the number of bits needed for quantization? Authors can provide an analysis of the dynamic range of pre-nonlinearity values across different models and datasets, and discuss how this impacts their quantization approach.\", \"l464\": \"vit_small_patch16_224 energy saving is between 0-10%. Is it because NAQ is applied only on one of the layers? If this is true, I suggest expanding this study to more layers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EWcOEZa6Ee
Nash-GBML: Nash Gradient-Based Meta-Learning
[ "Jaeyeon Jo", "Jinkyoo Park" ]
Meta-learning has been proposed to address fast adaptation to unseen tasks with little data. Traditional meta-learning is modeled as the Single-Leader Multi-Follower game consisting of inner and outer-level problems to minimize average or worst-case task loss. Because they assume all sampled tasks are independent, this reduces the flexibility of modeling complex interactions among tasks. Thus, we formulate meta-learning as a Single-Leader Multi-Follower game by considering the interaction among tasks at the inner level. We propose Nash-GBML, which incorporates a penalty term into the task loss function to model the interaction among task-specific parameters. We discuss the iteration complexity and convergence of the Nash-GBML algorithm. To validate our Nash-GBML algorithm, we introduce two penalty terms, which are designed to reduce the average and worst-case task loss. We empirically show that Nash-GBML with the proposed penalty terms outperforms traditional GBML in supervised learning experiments.
[ "model-agnostic meta-learning", "gradient-based meta-learning", "game theory", "nash game", "single-leader multi-follower game" ]
https://openreview.net/pdf?id=EWcOEZa6Ee
https://openreview.net/forum?id=EWcOEZa6Ee
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gXXJqu0rTa", "d8p60PlDoJ", "ZJCPf5sjnY", "VgGarzcFQP", "JloCmyCY1q" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731465471703, 1730512313288, 1730574706676, 1730701789492, 1729992403125 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2844/Authors" ], [ "ICLR.cc/2025/Conference/Submission2844/Reviewer_LL1U" ], [ "ICLR.cc/2025/Conference/Submission2844/Reviewer_NC8E" ], [ "ICLR.cc/2025/Conference/Submission2844/Reviewer_d6AL" ], [ "ICLR.cc/2025/Conference/Submission2844/Reviewer_yoqP" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes modeling the optimization procedure of meta-learning as a single-leader multi-follower game, with player interactions encoded by penalty terms that effectively act to constrain the within-task learners. They derive an optimization procedure for their approach and validate it using experiments and theoretical analysis.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of this type of game-theoretic approach to meta-learning is novel and a potentially interesting direction for research, assuming a motivation for tasks not being independent can be found.\\n2. The authors demonstrate some advantage of their approach experimentally.\", \"weaknesses\": \"1. The paper states that traditional meta-learning assumes tasks are independent and proposes an approach that should perform better if they are not. However, this setup is not motivated at all: in standard evaluations of meta-learning (few-shot classification, meta-RL, federated learning) the tasks are usually independent, at least in the statistical sense. What is the application here? Within the formulation (e.g.
Section 3.1) there is an introduction of penalty terms to \\u201caccount for the influence of the other task-specific parameters\\u201d but what does that actually mean mathematically? Why should the tasks be influencing each other? In Section 3.3 there seems to be some indication of a statistical motivation (\\u201cclose to each other for effective few adaptations to fit new tasks\\u201d) or an optimization motivation (\\u201callowing the algorithm to converge more stably\\u201d) but there is no real mathematical justification of either. The exact penalty terms proposed are also not strongly justified beyond allusions to robustness and worst-case vs. average-case.\\n2. It is unclear to me that the SLMF formulation reflects the goals of meta-learning, which is few sample generalization from unseen tasks. Rather the paper seems to be trying to model the training procedure of meta-learning (without really getting at the statistical aspect).\\n3. The theory mostly shows correctness but does not suggest any theoretical advantage of using this approach over others.\\n4. It is unclear to me why the specific penalty terms helped experimentally, or whether similar advantages could not be gained using conceptually simpler regularization approaches. In effect, this issue is similar to the first weakness, in that it is unclear to me what a task interaction is and thus why this approach should be the right way to model it.\\n5. Code is not provided.\", \"questions\": \"1. I am not sure if it is accurate to say that \\u201cTraditional meta-learning is modeled as the Single-Leader Multi-Follower game\\u201d as usually meta-learning is not formulated in game-theoretic terms. 
Perhaps it is more accurate to say that this paper formulates meta-learning as such a game, shows how past work fits within this formulation, and proposes a generalization of the formulation to non-independent tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper views meta-learning with a game-theoretic lens, where inner-level learning is considered a Nash game and outer-level learning is considered a Single-Leader Multi-Follower (SLMF) game. Based on these, the authors propose Nash-GBML, which incorporates additional penalty terms (centroid penalty, robust penalty) to improve performance. Convergence analysis is provided.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Considering game theory for meta-learning is somehow interesting.\", \"weaknesses\": \"1. The paper is poorly written.\\n\\t* The claims are vague. For example, the authors said it considers interactions among tasks compared to previous independent assumptions. However, I do not see an obvious relation between the proposed penalty terms and the task interaction. The penalty terms are very similar to those in previous meta-learning approaches with independent assumptions, such as [1,2, 3]\\n\\t* The theorems are badly formulated, with unclear assumptions and undefined notations. \\n2. Novelty.\\n\\t* In terms of the algorithm, I do not find any significant difference from previous works.\\n3. What is the benefit of using Nash-equilibrium in Algo.2? It's very unclear to me the difference compared to gradient descent. \\n4. The empirical improvement is very limited.\\n5. A large amount of related works are missing, which also consider different penalty terms.\\n * Probabilistic meta-learning [1, 2, 3]. \\n * Meta-learning theory with i.i.d. assumption [4,5,6].\\n * Meta-learning theory without i.i.d. 
assumption [7,8,9].\\n * Convergence of meta-learning [10, 11].\\n * Robust meta-learning [12].\\n\\n\\n[1] Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting\\ngradient-based meta-learning as hierarchical bayes. In International Conference on Learning\\nRepresentations, 2018.\\n\\n[2] Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn.\\nBayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems,\\npages 7332\\u20137342, 2018.\\n\\n[3] Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In\\nAdvances in Neural Information Processing Systems, pages 9516\\u20139527, 2018.\\n\\n[4] Anastasia Pentina and Christoph Lampert. A pac-bayesian bound for lifelong learning. In\\nInternational Conference on Machine Learning, pages 991\\u2013999, 2014.\\n\\n[5] Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn\\nstochastic gradient descent with biased regularization. In International Conference on Machine\\nLearning, pages 1566\\u20131575. PMLR, 2019.\\n\\n[6] Maria-Florina Balcan, Mikhail Khodak, and Ameet Talwalkar. Provable guarantees for gradientbased meta-learning. In International Conference on Machine Learning, pages 424\\u2013433. PMLR,\\n2019.\\n\\n[7] Anastasia Pentina and Christoph H Lampert. Lifelong learning with non-iid tasks. Advances in\\nNeural Information Processing Systems, 28, 2015.\\n\\n[8] Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Adaptive gradient-based metalearning methods. arXiv preprint arXiv:1906.02717, 2019\\n\\n[9] Chen, Qi, et al. \\\"On the stability-plasticity dilemma in continual meta-learning: theory and algorithm.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[10] Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. On the convergence theory of gradientbased model-agnostic meta-learning algorithms. 
In International Conference on Artificial Intelligence and Statistics, pages 1082\\u20131092. PMLR, 2020.\\n\\n[11] Kaiyi Ji, Junjie Yang, and Yingbin Liang. Multi-step model-agnostic meta-learning: Convergence and improved algorithms. arXiv preprint arXiv:2002.07836, 2020.\\n\\n[12] Wang, Qi, et al. \\\"A simple yet effective strategy to robustify the meta learning paradigm.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"questions\": \"1. Can the authors elaborate on what kind of interaction you are modelling?\\n2. Line 208 average or worst-case? Which one exactly?\\n3. what is the definition of $w(\\\\cdot)$ in eq (13)?\\n4. In Theorem 3.2 and 3.3, the convex assumption of what?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper dresses gradient-based meta-learning in a game-theoretic framework with the goal of studying how solving tasks in succession contrasts with solving them independently. In that sense, the authors\\u2019 aim to study the interaction among tasks, and the resulting effect on the final meta-learned solution. Based on their framework, they propose two regularization penalties for gradient-based meta-learning. Theoretically, they show convergence and obtain convergence rates in the convex setting. Empirically, they show their penalties outperform MAML on synthetic sinusoid regression tasks, on small image completion tasks, and on image classification tasks.\\n\\nWhile not completely novel, the game-theoretic perspective of meta-learning is understudied and this work could help extend it. Unfortunately I found the game-theory language mostly distracting for this work and it\\u2019s unclear to me why the paper benefits from it. The proposed penalties can be motivated without it and, as far as I can tell, the convergence results don\\u2019t need it either. 
I also think the convex assumptions for the theory is too strong (see below) and empirically these penalties only marginally outperform the baselines.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"I appreciate the authors\\u2019 effort to study meta-learning under a different lens \\u2014\\u00a0here, game theory. In principle their exposition could help bring meta-learning to the attention of a different community.\", \"I also appreciated the authors\\u2019 choice to include empirical results on non-traditional tasks, namely image completion. Too often, meta-learning papers focus on the same benchmarks (mini-imagenet, cifar-fs) so seeing positive results on new tasks is encouraging. Especially so, since the authors try their proposed method on two meta-learning algorithms and see (minor) gains in both.\"], \"weaknesses\": [\"This paper is difficult to read, in part due to clarity and in part due to the game theory. The clarity issues should be easily fixable: for example, using log-scale for the y-axis of Fig. 2 should help differentiate between methods; similarly, the authors can probably fix their Fig. 2 to highlight the difference between GBML and Nash-GBML \\u2014\\u00a0what are the green \\u201cinteractions among tasks\\u201d spring concretely? This point is central to the paper and yet the authors never explicitly define it.\", \"On the other hand I found the game theory framework unnecessary and ultimately obscures the exposition. I do not understand what the authors get from it over an optimization or probabilistic framework. The two penalties they introduce are not tied to the framework, nor are the theoretical results. It feels like the authors tried to shoehorn meta-learning into game theory math.\", \"Theoretically, I think the convexity assumptions is too strong. 
The gradient-based meta-learning problem is non-convex even for linear regression tasks, so it\u2019s not clear when the paper\u2019s results would apply.\", \"Empirically, the proposed penalties never significantly outperform the baselines without the penalties. In most results, the baselines get within confidence intervals of the penalized versions despite having one less hyper-parameter. Thus the experimental results alone are not too compelling either.\"], \"questions\": \"Please rebut the weakness claims above. Specifically, could the authors clarify what benefit the game theoretic formulation offers for the proposed penalties?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies a new scenario of meta-learning that considers the interaction among tasks at the inner level. It formulates meta-learning as a Single-Leader Multi-Follower game and proposes an algorithm, called Nash-GBML to solve the new optimization problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper considers a new formulation of meta-learning. The setting seems interesting.\\n\\nThe experimental results are complete.\", \"weaknesses\": \"The motivation for the new meta-learning formulation, which considers the interaction among tasks at the inner level, is unclear and confusing. This paper adds a penalty term in the inner level of MAML, which satisfies the new formulation. This term seems to be related to the robustness of the meta-learning. However, if the paper targets to use the penalty term to address the robustness issue, the paper should be compared with many existing papers that consider the same issue.\\n\\nThe contribution of the paper is weak. The game theoretic interpretation of gradient-based meta-learning (Stackelberg game) is well-known. 
The proposed method, based on the conventional algorithm for the Stackelberg game, provides limited contribution to the algorithm design.\", \"questions\": \"The concept of a Single-Leader Multi-Follower (SLMF) game is not clear. As the conventional meta-learning problem has been formulated as a single-leader multi-follower game, what is the difference between the traditional meta-learning formulation and the new proposed formulation?\\n\\nIf the motivation for the penalty term is addressing the robustness issue of meta-learning, the experiment should take the existing robust meta-learning as the baseline.\", \"confusion_about_the_meta_test_stage\": \"if we consider the iteration between different lower-level tasks, how does the method do the meta-test when a single new task is given?\\n\\nAs claimed in the paper, the method with the penalty term holds the benefit of robustness and drawback of the overall average performance. Why the performance of the proposed method can achieve better robustness and better overall average performance than the original version (in Tables 3 and 4)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
EWQaqDgXgr
Sparse Repellency for Shielded Generation in Text-to-Image Diffusion Models
[ "Michael Kirchhof", "James Thornton", "Pierre Ablin", "Louis Béthune", "Eugene Ndiaye", "marco cuturi" ]
The increased adoption of diffusion models in text-to-image generation has triggered concerns on their reliability. Such models are now closely scrutinized under the lens of various metrics, notably calibration, fairness, or compute efficiency. We focus in this work on two issues that arise when deploying these models: a lack of diversity when prompting images, and a tendency to recreate images from the training set. To solve both problems, we propose a method that coaxes the sampled trajectories of pretrained diffusion models to land on images that fall outside of a reference set. We achieve this by adding a simple repellency term to the diffusion SDE throughout the generation trajectory, that is triggered whenever it is expected to land too closely to an image in the shielded reference set. Our method is sparse in the sense that these repellency terms are mostly zero and inactive, even more so towards the end of the generation trajectory. Our method, named SPELL for sparse repellency, can be used either with a static reference set that contains protected images, or dynamically, by updating the reference set at each timestep with the expected images concurrently generated within a batch. We show that adding SPELL to popular diffusion models improves their diversity while impacting their FID only marginally, and performs comparatively better than other recent training-free diversity methods. Moreover, we demonstrate how SPELL can ensure a shielded generation away from a very large set of protected images by considering all 1.2M images from ImageNet as the protected set.
[ "Diffusion Model", "Guidance", "Repellency", "Diversity" ]
Reject
https://openreview.net/pdf?id=EWQaqDgXgr
https://openreview.net/forum?id=EWQaqDgXgr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yNmBhCXyli", "xi3DFD0H4m", "xVCFEpzV2n", "xOgRnSH5vU", "qYqyFb1VVk", "lY9FRNa2rg", "kOKmgTMJFH", "jvjVmGWWi6", "huAK1FXlsh", "gKTAjj2QaF", "g25QpYC8dl", "bfbKvoimSi", "YhOLikpgAl", "TskYmHO937", "PUJ1piKUZp", "PLEWL6NKef", "OVyAWi4RR0", "LjsvksizLP", "KdRnGzW7lH", "JKQFraqhlu", "FAu0aqy9Jq", "FAMxfAyHjp", "EzntMmmbws", "31ufSkCJjo", "1eMkPrq2Op" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732702878213, 1731629111804, 1732629877391, 1730816390460, 1731920474978, 1731954508041, 1731627299317, 1732639289499, 1731954876926, 1730717669262, 1731625459100, 1732038353583, 1737524132741, 1732472627425, 1731990191230, 1732682594801, 1732316059546, 1732674680666, 1731627661474, 1734942934978, 1732931243417, 1730692977748, 1732702521137, 1731630158271, 1730567680620 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_EnjK" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_EnjK" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_H25b" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_V3e4" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_H25b" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_EnjK" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_94st" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_V3e4" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Area_Chair_JGeL" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_94st" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_94st" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Authors" ], [ "ICLR.cc/2025/Conference/Submission11592/Reviewer_H25b" ] ], "structured_content_str": [ "{\"comment\": \"Thank you again for engaging with us in this rebuttal, we are grateful for your time and your many insights that have helped us improve the submission.\\n\\n> **I am now very confident to recommend accepting this paper (somewhere around a score of 7), while staying conservative and uncertain on higher scores.**\\n\\nWe are grateful for your acceptance recommendation. \\n\\nOn the point of selecting a score that properly reflects your recommendation, we believe this is of course, ultimately, your decision and yours only.\\n\\nSince the current scale at ICLR does not include a numerical rating of 7 nor 9, the alternatives are \\n* _6: Marginally above the acceptance threshold_\\n* _8: Accept, good paper_\\n* _10: Strong accept, should be highlighted at the conference_\\n\\nWith these text labels attached to the numerical scores in mind, we do not believe that an 8 in ICLR should be treated in this scale as a \\u201chigh score\\u201d as in other conferences (in ICML/NeurIPS, it would be _\\u201c8: Strong Accept\\u201d_). 
It just means a recommendation for acceptance, expressing confidence that you believe the paper will be an interesting addition to the program (as a poster or else), so the ICLR _\\u201c8: Accept\\u201d_ is more similar to a 7 in other venues (ICML/Neurips defines _\\u201c7: Accept\\u201d_). A 6, on the other hand, means that you are more undecided about its fate, erring on the side of caution.\\n\\n> **The idea itself, regardless of how the experiments perform, is worth publishing.**\\n\\nMany thanks for this assessment.\\n\\n> **the experiments, both qualitative and quantitative, still have certain imperfections, and I'd refer to other reviewers on this point.**\\n\\nIf you have other specific concerns or questions on our experiments, we will do our best to answer them. We remain at your disposal to clarify them and welcome this opportunity to further improve the paper. At this point, we believe that we have answered all outstanding questions, as summarised in our general answer.\"}", "{\"title\": \"Reviewer 94st: Rebuttal discussion\", \"comment\": \"We are very grateful for your detailed review, your encouragements and your score of *4* for presentation. Here are a few answers to your concerns:\\n\\n> **While SPELL can be used for any diffusion pipelines, the effectiveness of SPELL for smaller models or domains other than ImageNet is not fully investigated.**\\n\\nIt was very tempting for us to use text2image as a testing ground for SPELL because this facilitates visual inspection of results and is well suited to the ICLR audience. Additionally, since one of our main claims is that our interventions are sparse (unlike PG) and can scale, considering very large reference sets (e.g. 1.2M images in ImageNet) was crucial. 
We believe that using our code will run into fewer challenges on smaller scale problems and other fields.\\n\\n> **SPELL does not provide a very tight guarantee to avoid generations of similar images to the reference set, which can limit the applicability of SPELL for high-risk cases.**\\n\\nThanks for this great point. SPELL is already the first method that is close to a perfect guarantee due to its geometric intervention rules (Eq. 5). In the protection experiment in S.4.6, we previously showed a 99.45% protection rate for EDMv2 + SPELL, compared to 92.40% for the bare EDMv2.\\n\\nIn fact, we can introduce a tradeoff, where the fast-NN search is more accurate (and longer) by increasing the number of searched Voronoi cells. We can reach a 99.84% success rate with a more accurate search (see updated Table 2 in revision). We have not yet optimized the parameters of the fast-NN search (currently this is run, sub-optimally, on CPU). We believe these overheads can be significantly reduced.\\n\\n| Model \\t| Searched cells | Time per image (s) \\u2193 | Generated images far enough from all ImageNet images \\u2191 |\\n|---------------|----------------|-----------------------|---------------------------------------------------------|\\n| EDMv2 \\t| - \\t| 2.434 \\t| 92.40% \\t|\\n| + SPELL \\t| 1 \\t| 4.633 \\t| 98.92% \\t|\\n| + SPELL \\t| 2 \\t| 6.057 \\t| 99.45% \\t|\\n| + SPELL \\t| 3 \\t| 7.790 \\t| 99.67% \\t|\\n| + SPELL \\t| 5 \\t| 9.949 \\t| 99.78% \\t|\\n| + SPELL \\t| 10 \\t| 13.545 \\t| 99.84% \\t|\\n\\n\\n> **Applying repellency terms based on the current state x_t instead of the expected final output seems applicable for the intra-batch repellency case, providing better diversity to a batch of generated images. 
Can it be one of the baselines to compare SPELL for the intra-batch case?**\\n\\nThanks for this great comment.\\n\\nWhen the repellency terms only depend on the current state $x_t$ (and when the repellency terms are the grad of an interaction potential, and we restrict to intra-batch repellency) what you suggest is exactly particle guidance ([Corso et al. 24]), as detailed in lines 250~265. PG is featured prominently in our experiments (it's one of the baselines in Fig. 4, and added to Table 4 in the revised paper).\\n\\nNote however that PG interactions are not sparse, and cannot therefore be realistically extended to our large scale protection experiment.\\n\\n> **It seems like SPELL forces each trajectory to arrive near the boundary of other balls (shields).**\\n\\nWe will clarify this, but this is not really the case.\\n\\nWhat you describe would be somewhat our worst-case scenario, in which SPELL would fulfill the shielding requirement only when applied _at the last timestep_.\\n\\nInstead, as depicted in Fig. 1, when SPELL detects that a generation at time $t$ is _pointing_ towards a shield, it alters the diffusion direction to make it point outside of that shield. Ideally, these modifications happen as early as possible so that the diffusion can explore a different mode instead and perturb as little as possible the later (image-detail defining) stages of the diffusion, to avoid visual artifacts (Fig. 17).\\n\\nLuckily, following our intuition, SPELL interventions occur mostly in early diffusion timesteps, as shown in the density plot in Fig. 5a, that shows many more interventions early on in diffusion at time $t=1$. This is tightly connected to the radius that is chosen (Fig. 5b) and is indeed confirmed when $r=20$. This is also extensively documented in Figures 9~16. 
\\n\\n> **Can further methods (something similar to momentum or just larger overcompensation) improve the diversity of generated images?**\\n\\nWe have not tried momentum, but this is a great suggestion. A small overcompensation of 1.6 seems to work very well. One combination we explored is to both use a lower CFG weight to increase diversity and add SPELL. In Appendix I, Figures 18 - 24, we find that even when we already increase diversity by decreasing the CFG weight, adding SPELL still boosts further diversity.\"}", "{\"comment\": \"Thank you for taking the time for such an exceptional in-depth discussion! We\\u2019ve generated some more data to follow up on your intuition regarding shutterstock overlays.\\n\\nWe\\u2019ve generated 1600 examples for Simple Diffusion without SPELL and 1600 with SPELL. Without SPELL, **62/1600** images have a shutterstock (or similar) overlay, with SPELL it\\u2019s **105/1600**.\\n\\nTo confirm that this is a stable trend, we\\u2019ve also generated images with Latent Diffusion (which was trained on the same dataset as Simple Diffusion), where it\\u2019s **79/800** without SPELL and **98/800** with SPELL. While the second result could still be a random chance (Chi-Square independence test with Yates' continuity correction gives p-value = 0.15), the first result is beyond random (p=0.001), and also the effect size is quite measurable (7% vs 4% overlay rate).\", \"two_observations\": \"- We have noted that the copyright overlays tend to happen clustered at specific prompts. E.g., one motorcycle prompt has 21/32 images with overlay, and many other prompts have 0. So the distribution is quite skewed and seems to depend on the prompt. \\n- We have calculated pairwise distances within generated batches and found that the watermarks can serve to push away images from similar ones without the watermark and to pull together images with the same watermark. 
\\n\\nIn our understanding the copyright overlays might serve as a \\u201chighway\\u201d between modes, which could allow SPELL to easily explore new modes. SPELL only uses this if the training images / mode distribution for a given prompt actually includes these overlays. \\n\\nWe can include these findings in the revision. We also believe that it is a good inspiration for future works: SPELL could be applied to only change parts of an image by simply masking the distance calculation. This could for example allow to explicitly suppress copyright overlays if one wishes so, or to remove a part of an image, shield its original content, and guarantee to generate truly new fill-ins with applications in anonymization. However, this would require changing the evaluation protocol severely, so we will denote it as an avenue for future research in the paper in the revision.\"}", "{\"summary\": \"This paper presents a novel technique called SPELL to address the challenge of shielded generation and improve generation diversity in text-to-image diffusion models. SPELL shields the model from replicating protected images and promotes intra-batch diversity by adding sparse repellency terms to the diffusion process, guiding generated images away from a reference set and the images in the same batch. The authors demonstrate the effectiveness of their methods via experiments on the state-of-the-art diffusion models. Compared to the previous works, SPELL achieves the best trade-off between image diversity and generation quality. Further, the authors empirically show that SPELL is scalable with a large reference set.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles a timely and practically-relevant problem supported by a fair amount of experiments. 
Shielded generation of text-to-image diffusion models is an area with limited prior research, making this work particularly valuable.\", \"Overall, the paper is clearly written and easy to follow.\"], \"weaknesses\": [\"One key weakness of this paper is the lack of experiments regarding the main trade-offs against baseline methods. For example, while Table 1 indicates that SPELL has a minor trade-off in precision, the authors do not compare this trade-off with the baselines. Figure 4 is the only comparative result provided, but it lacks an analysis of image quality. Specifically, I would like to know if all models in Figure 4 are capable of similar generation quality, say in terms of FID.\", \"Since SPELL is a training-free sampling method, the authors should also provide a quantitative analysis regarding inference time. For instance, an analysis of average wall-clock time compared with baseline methods, or testing with larger reference dataset sizes, would be helpful for readers.\", \"In the main qualitative analysis in Section 4.5, Figure 6 does not convincingly illustrate improved image generation diversity. For example, the fourth image is repelled from the third image, and it's unclear why this image is closer to the third image than to the first or second. Similarly, in the 13th image, the only notable difference is the color of the ball, and yet the blue ball is the most common color in prior images. Additionally, it would be beneficial if the authors provided examples with multiple image batches, other than a single-image batch.\", \"Regarding Figure 7, I wonder whether the $L_2$ distance-wise nearest neighbor search was the best choice. 
This is because many images in the third row (EDM + SPELL) seem more similar to the second row (ImageNet neighbor for EDM) rather than the fourth row (ImageNet neighbor for EDM+SPELL).\"], \"questions\": [\"In line 359, it states that \\\"precision and density decrease slightly in 5 out of 6 models.\\\" However, according to Table 1, isn't this actually the case for 4 out of 6 models? Please correct me if I am wrong.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"I deeply appreciate the authors for the additional experiments, which have significantly clarified my understanding. That said, I still have a few unresolved questions:\", \"Regarding Fig. 6: I understand that the 4th image is repelled from the 3rd due to the background. However, it seems that\", \"the altered trajectory of the 4th image share the same background as the original (unaltered path) image. If the 4th image in (SD3+SPELL) had a different background, I would have no doubts about SPELL\\u2019s performance. Did I misunderstand something?\", \"Thank you for the details on wall-clock inference time and the additional trade-off graph in Fig. 4. I now wonder if SPELL is competitive in inference speed compared to the baselines. This is because SPELL seems to be quite costly in Table 2. Could the authors provide another trade-off graph including wall-clock time? I understand this was not part of my initial questions, and I apologize for raising a new point.\", \"I look forward to seeing the multi-batch results.\"]}", "{\"comment\": \"Thank you for joining the discussion! We appreciate your comments and are happy to enhance the revised paper following your suggestions.\\n\\n> **Regarding Fig. 6: It seems that the altered trajectory of the 4th image share the same background as the original (unaltered path) image. 
If the 4th image in (SD3+SPELL) had a different background, I would have no doubts about SPELL\\u2019s performance. Did I misunderstand something?**\\n\\nIn this particular case, it does seem that the intervention was small. Any interpretation (as the one we made with background in our previous answer) is just an educated guess, as the geometric considerations that triggered SPELL are in latent space.\\n\\nHowever, an interesting point that you highlight is that for intra-batch diversity, or diversity to previously generated images, the most interesting and visible interventions will tend to happen **after** a few images have been generated, because there are more shields to take into account. In that sense, we believe that it is the **second** row in Fig. 6 that conveys best the results of SPELL interventions. This is very visible in iterations 11,14 or 15.\\n\\nThis is what we wrote in the original caption and in the text, notably in **L.455**, _\\u201cEarly images are adjusted less often and mostly in details because they are still novel enough. Later images repel from more previous images and more strongly to ensure they are different enough.\\u201c_, or in the discussion L. 458-464.\\n\\n> **Thank you for the details on wall-clock inference time and the additional trade-off graph in Fig. 4. I now wonder if SPELL is competitive in inference speed compared to the baselines. This is because SPELL seems to be quite costly in Table 2. Could the authors provide another trade-off graph including wall-clock time?**\\n\\nThis is not a problem at all! We are happy to clarify first two important points: \\nThe results for Table 2 (in **Section 4.6**) are for the **protection** task where the goal is to avoid recreating an image too close to the $K=1.2M$ images in imagenet (see. Fig. 7 for an illustration). The overheads are significant because of the sheer size of $K$, and entirely spent on a fast NN lookup. 
While we believe they could be brought down significantly with additional optimizations (search is currently done on CPU, which is not optimal), we believe the scale there is bound to cause some overhead.\\n\\nThe experiments in Fig. 4 (in **Section 4.3**) are for the **generation diversity** task, where the goal is to avoid generating images self-similar to previously or concurrently generated images. The scale of the overhead in that case is **truly negligible** when contrasted with the diffusion generation cost (= computing up to ``[B,K]`` distance matrices per time $t$, where both ``B`` and ``K`` do not exceed hundreds, and adding one single correction vector to the score).\\n\\nAs a result, unless a user wishes to explore self-diversity to a previously generated batch of 1M images, the timings in Table 2 have no relevance to the self-diversity experiments.\\n\\n**When looking at intra-batch diversity, we are happy to report the following timings, which agree with the intuition presented above**. These timings represent generation time per image, in seconds, using batches of size ``B=8``. As we expected, the overheads for SPELL and the other batch-diversity methods we benchmarked are negligible when using small batches. We have added this as Table 5 to Appendix G and refer to it in the main text in Section 4.6.\\n\\n|Model|Generation time per image (seconds)|\\n|------------------|----------------|\\n|Baseline (Simple Diffusion)|2.93\\u00b10.12|\\n|Simple Diffusion + PG|2.96\\u00b10.13|\\n|Simple Diffusion + IG|2.93\\u00b10.12|\\n|Simple Diffusion + CADS|2.96\\u00b10.12|\\n|Simple Diffusion + SPELL|2.94\\u00b10.13|\\n\\n\\n> **I look forward to seeing the multi-batch results.**\\n\\nThanks for the suggestion and your patience. **We have now finished generating 400 further example images. 
We provide these examples of Simple Diffusion with and without SPELL in Figures 25 to 34 in the revised paper.** The 10 prompts are chosen from MS COCO, which Simple Diffusion was not trained on. As opposed to Figure 1, this features both of SPELL\\u2019s capabilities: Intra-batch repellency (every row is a batch of 4), and inter-batch repellency from previous batches, which we treat as the shielded set. The examples affirm qualitatively that SPELL increases the diversity of generated images. Notably, this is without introducing visual artifacts and without lowering the prompt adherence, which other baselines like IG are prone to, see Table 4 and Figure 4.\"}", "{\"title\": \"Reviewer EnjK: Rebuttal discussion (part 2)\", \"comment\": \"> **For instance, an analysis of average wall-clock time compared with baseline methods, or testing with larger reference dataset sizes, would be helpful for readers.**\\n\\n_**When using ImageNet-1k as the reference set (L.983 in our algorithm)**_, this is already detailed in the rightmost column of Table 2, L. 492. In that experiment with a reference set of over 1M images, using the same model, generation time goes from 3.5s to 6s using SPELL. \\n\\nThe overhead of 2.5s is mostly due to the fast nearest-neighbor (NN) search to the $K=1,281,167$ reference set images at each $t$ during generation. That time grows _at most_ linearly with $K$ when adding more neighbors. \\n\\nWe have extended **Table 2 with more parameters** (see revision), where we control more accurately the quality of fast-NN search by increasing the number of Voronoi cells that the NN algorithm calculates distances for, which controls both accuracy and compute. As shown below, when going from 1 to 2 cells, runtime increases by ~1.4 seconds, when going from 2 to 3 by another 1.7s, from 3 to 5 by 2.2s, and from 5 to 10 by 3.6 seconds. 
Note also that with more inference runtime, we are able to guarantee even better protection rates (0.16% at 10 cells compared to 0.6% in the paper, which used 2 cells). We have not yet optimized the parameters of the fast-NN search (currently this is run, sub-optimally, on CPU). We believe these overheads can be significantly reduced.\\n\\n\\n| Model | Searched cells | Time per image (s) \\u2193 | Generated images too close to ImageNet neighbors \\u2193 |\\n|---------|----------------|-----------------------|-----------------------------------------------------|\\n| EDMv2 | - | 2.434 | 7.60% |\\n| + SPELL | 1 | 4.633 | 1.08% |\\n| + SPELL | 2 | 6.057 | 0.55% |\\n| + SPELL | 3 | 7.790 | 0.33% |\\n| + SPELL | 5 | 9.949 | 0.22% |\\n| + SPELL | 10 | 13.545 | 0.16% |\\n\\n\\n_**When the reference set is a set of concurrently or previously generated samples (L.991 in our algorithm)**_ to increase diversity, the overhead consists of calculating pairwise distances for up to 128 (expected or realized) images at each diffusion time. That computation is negligible. The runtimes of the base model, SPELL, and also CADS, IG, and PG are equal in this setup. \\n\\n> **For example, the fourth image is repelled from the third image, and it's unclear why this image is closer to the third image than to the first or second.**\\n\\nThis is likely due to background similarity between images 3 & 4 (top row). 
Note that SPELL intervenes *during generation* and not as a post-processing mechanism: To emphasize (in the spirit of L.425~), SPELL was *not* used because SD3 Image 4 came out as too close to Image 3; SPELL was used before that, during the generation of Image 4, because (L.463) it \\u201cwas _expected_ to come out too close to the 3rd image\\u201d.\\n\\n> **Additionally, it would be beneficial if the authors provided examples with multiple image batches, other than a single-image batch.**\\n\\nThanks for the suggestion! We plan to generate further examples (we are working to get access to compute resources) and will post these examples here very soon.\\n\\n> **I wonder whether the L2 distance-wise nearest neighbor search was the best choice. This is because many images in the third row (EDM + SPELL) seem more similar to the second row (ImageNet neighbor for EDM) rather than the fourth row (ImageNet neighbor for EDM+SPELL).**\\n\\nYour question can be parsed in two different ways: (i) whether to use the L2 distance, and (ii) whether to use L2 of images in the MDTv2 latent space to guide SPELL.\\n\\nOn (i), fast NN search (e.g. FAISS as used here) is designed to work for L2, and we are hence somewhat stuck with it.\\n\\nOn (ii), indeed, we are not bound to using that representation for SPELL; the user may define any other encoding of interest. In image protection, defining an encoding also defines explicitly the type of protection that is achieved.\\n\\nFor now, our goal in Fig. 7 was to convey clearly that SPELL avoids obvious copies (e.g. images 2, 3, 4, 5, 6, 7, 8) that using EDMv2 directly would output.\\n\\n> **However, according to Table 1, isn't this actually the case for 4 out of 6 models? Please correct me if I am wrong.**\\n\\nWe apologize for not being clear. We write _The third diversity metric, coverage, is also increased in all models except SD3_ (L.323). 
This is technically correct, because SD3-Medium is considered both in the text2image and class2image tasks. But you are also correct, this is 4/6 _experiments_ in Table 1; we have corrected this entire part in L.358.\"}", "{\"title\": \"Score Adjustments\", \"comment\": \"Thank you for providing a thorough analysis in such a short time. This understanding and the extended variant to apply SPELL on partial images with masks are quite interesting to me.\\n\\nAfter considering the addressed concerns and the opinions from other reviewers, I am now **very confident to recommend accepting this paper (somewhere around a score of 7), while staying conservative and uncertain on higher scores**. The reasoning is as follows:\\n- My recommendation on acceptance is drawn from the well-written paper, the sufficient results on experiments, and particularly, the interesting concept and approach of sparsely penalizing the sampling process. The idea itself, regardless of how the experiments perform, is worth publishing.\\n- The reason why I'm not sure on recommending higher scores is that the experiments, both qualitative and quantitative, still have certain imperfections, and I'd refer to other reviewers on this point. That said, I am extremely grateful to the authors for providing these massive new results on such short notice.\"}", "{\"comment\": \"We are happy to come back to you with new results that we have added to the revised paper, following your suggestion.\\n\\n> **This paper could benefit from more concrete visual evidences.** / **This paper would also benefit from providing more diversity results, but it is understandable if this is infeasible considering the limited time for rebuttal.**\\n\\n**We have now finished generating 400 further example images. We provide these examples of Simple Diffusion with and without SPELL in Figures 25 to 34 in the revised paper.** The 10 prompts are chosen from MS COCO, which Simple Diffusion was not trained on. 
The examples affirm qualitatively that SPELL increases the diversity of generated images. Notably, this is without introducing visual artifacts and without lowering the prompt adherence, which other baselines like IG are prone to, see Table 4 and Figure 4.\"}", "{\"summary\": \"This paper introduces a novel guidance mechanism called SPELL (Sparse Repellency) aimed at enhancing diversity and protecting certain reference images during the generation process in text-to-image diffusion models. This approach addresses two common challenges with diffusion models: the tendency to produce repetitive images for the same prompt and the potential risk of inadvertently recreating training images, which raises privacy and copyright concerns. In summary, SPELL is a post-training intervention that enhances image diversity and safeguards specific images by selectively adjusting generation paths. This method offers a practical solution for more diverse and privacy-respecting image generation in diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written.\\n2. The addressed problem is important in diffusion models.\", \"weaknesses\": \"I am not familiar with this field, but I find the issue addressed in this paper to be interesting and important. AC can disregard my opinion and score.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer EnjK: Rebuttal discussion (part 1)\", \"comment\": \"We would like to thank Reviewer **EnjK** for their encouraging comments and constructive review. 
We did our best within the time constraints to address all of the points that you have raised, and will gladly answer any further concerns.\\n\\n> **For example, while Table 1 indicates that SPELL has a minor trade-off in precision, the authors do not compare this trade-off with the baselines. Figure 4 is the only comparative result provided, but it lacks an analysis of image quality. Specifically, I would like to know if all models in Figure 4 are capable of similar generation quality, say in terms of FID.**\\n\\nThis is a great point, thanks for raising it. Our goal (L. 403) was indeed to contrast diversity vs *quality* in 3 different plots, 3 different Pareto fronts, using Precision / Density / Clip scores. Following your request, **we have added a fourth plot in Fig. 4, with FD_DINOv2 vs CLIP Score**, as can be seen in our updated draft. We can add other similar figures if you think they would be relevant.\\n\\n**We have also added the following table (Table 4, p.21 in draft)**, with all metrics, to provide readers with an exhaustive view. 
\\n\\n|Method|Recall|Vendi Score|Coverage|Precision|Density|FID|$FD_\\\\text{DINOv2}$|CLIP Score|\\n|----------------------------------------------------|--------|-------------|----------|-----------|---------|-------|-----------|--------|\\n|Base Model|0.237|2.527|0.446|0.558|0.768|9.566|105.967|27.789|\\n|Particle Guidance, strength=1024|0.099|1.987|0.249|0.300|0.326|84.115|705.661|24.470|\\n|Particle Guidance, strength=512|0.230|2.753|0.378|0.443|0.534|23.106|286.093|26.740|\\n|Particle Guidance, strength=256|0.252|2.656|0.429|0.523|0.682|11.934|154.897|27.440|\\n|Particle Guidance, strength=128|0.248|2.591|0.447|0.553|0.754|9.442|109.257|27.704|\\n|Particle Guidance, strength=64|0.245|2.561|0.449|0.559|0.771|9.072|101.796|27.781|\\n|Particle Guidance, strength=32|0.235|2.528|0.445|0.557|0.763|9.724|108.382|27.812|\\n|Particle Guidance, strength=16|0.236|2.529|0.446|0.557|0.764|9.596|107.041|27.813|\\n|Interval Guidance, [0.1,0.9]|0.372|2.840|0.455|0.537|0.730|8.385|85.871|27.453|\\n|Interval Guidance, [0.2,0.9]|0.419|2.994|0.442|0.514|0.689|8.359|85.094|26.813|\\n|Interval Guidance, [0.1,0.8]|0.470|3.174|0.448|0.500|0.663|7.507|76.104|27.215|\\n|Interval Guidance, [0.3,0.9]|0.471|3.208|0.421|0.483|0.635|8.406|87.971|25.885|\\n|Interval Guidance, [0.2,0.8]|0.518|3.340|0.434|0.478|0.624|7.478|75.250|26.544|\\n|Interval Guidance, [0.1,0.7]|0.567|3.576|0.432|0.451|0.577|6.804|72.092|26.784|\\n|Interval Guidance, [0.4,0.9]|0.525|3.495|0.395|0.442|0.569|8.623|96.611|24.630|\\n|Interval Guidance, [0.3,0.8]|0.571|3.575|0.411|0.446|0.570|7.556|78.887|25.549|\\n|Interval Guidance, [0.2,0.7]|0.614|3.770|0.417|0.426|0.536|6.771|72.972|25.979|\\n|Interval Guidance, [0.1,0.6]|0.673|4.138|0.396|0.385|0.466|6.885|81.643|26.020|\\n|CADS, mixture factor=0, tau_1=0.6|0.262|2.598|0.447|0.553|0.753|9.248|105.006|27.746|\\n|CADS, mixture factor=0, tau_1=0.7|0.253|2.579|0.448|0.555|0.757|9.288|105.549|27.757|\\n|CADS, mixture factor=0, 
tau_1=0.8|0.245|2.561|0.449|0.557|0.762|9.356|105.856|27.771|\\n|CADS, mixture factor=0, tau_1=0.9|0.239|2.545|0.450|0.559|0.767|9.452|106.455|27.790|\\n|CADS, mixture factor=0.001, tau_1=0.6|0.325|2.816|0.442|0.531|0.696|8.897|105.081|27.534|\\n|CADS, mixture factor=0.001, tau_1=0.7|0.297|2.734|0.446|0.540|0.719|8.963|104.006|27.617|\\n|CADS, mixture factor=0.001, tau_1=0.8|0.277|2.660|0.447|0.548|0.739|9.098|103.766|27.697|\\n|CADS, mixture factor=0.001, tau_1=0.9|0.256|2.588|0.448|0.554|0.755|9.273|105.268|27.754|\\n|CADS, mixture factor=0.002, tau_1=0.6|0.425|3.208|0.417|0.472|0.584|9.870|129.159|26.920|\\n|CADS, mixture factor=0.002, tau_1=0.7|0.380|3.028|0.429|0.501|0.637|9.143|114.333|27.242|\\n|CADS, mixture factor=0.002, tau_1=0.8|0.330|2.837|0.442|0.529|0.692|8.893|105.511|27.506|\\n|CADS, mixture factor=0.002, tau_1=0.9|0.277|2.660|0.446|0.548|0.739|9.098|103.762|27.696|\\n|SPELL, shield radius=40|0.370|2.998|0.437|0.500|0.631|13.072|140.841|27.397|\\n|SPELL, shield radius=35|0.359|2.935|0.445|0.518|0.665|11.452|120.346|27.556|\\n|SPELL, shield radius=30|0.337|2.856|0.451|0.531|0.695|10.349|106.753|27.655|\\n|SPELL, shield radius=25|0.312|2.774|0.454|0.542|0.723|9.794|100.123|27.739|\\n|SPELL, shield radius=20|0.287|2.691|0.455|0.552|0.746|9.535|98.666|27.781|\\n|SPELL, shield radius=15|0.263|2.616|0.454|0.558|0.762|9.558|100.709|27.811|\"}", "{\"title\": \"Many thanks for reading our rebuttal.\", \"comment\": \"We are grateful for your grade increase, and we remain available to answer other concerns or requests for clarifications!\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"Thank you so much for your timely response and your additional results. After carefully reviewing these new results and comments, I'm summarizing my previously mentioned three major concerns.\", \"All the trade-off experiments provided good overall results compared to other methods. These results in Fig. 4 and Tab. 
4 are not perfect, but good enough to convince me on this point.\", \"The reasoning that SPELL is NOT a special case of PG has convinced me. SPELL is indeed a sparse method that is only triggered on certain conditions, while PG is always adding a gradient term that manipulates the sampling process.\", \"New qualitative results from Fig. 25 to Fig. 34 are generally good. Although the differences with and without SPELL on some cases are not obvious or straightforward, the overall performances would be sufficient.\", \"That said, I still have a minor question just out of curiosity. In Fig. 29(b), there are watermarks saying \\\"ShutterStock\\\" in some of the generated samples, which clearly results from the WebVid dataset. Since the samples without SPELL applied are not producing such watermarks, I'm therefore wondering whether it has anything to do with SPELL?\"]}", "{\"comment\": \"Thank you for your prompt response. Most of my concerns have been addressed. I thus raise my score from 5 to 6 and lean towards acceptance.\"}", "{\"comment\": \"Thank you for the detailed response about my review, and sorry for the late reply.\\n\\nI have one remaining question regarding the first question. If I understand correctly, the major difference between the proposed SPELL and PG is where the distance between two points is computed (the space of the expected final output for SPELL and the space of the current value for PG). Does the sparsity property of SPELL come from this distinction or some differences in the detailed mechanism, such as the one used to find neighbors to repel?\"}", "{\"title\": \"General response, a few days before rebuttal closing.\", \"comment\": [\"Dear Reviewers,\", \"We would like to thank you for your time. We felt very encouraged by your positive comments:\", \"our shielded generation addresses a _\\u201ctimely and practically-relevant problem [...] 
with limited prior research, making this work particularly valuable\\u201d_ (EnjK, V3e4, 94st, H25b)\", \"our paper and notation is _\\u201cwell-written and easy-to-understand\\u201d_ (H25b, EnjK, V3e4).\", \"our paper has \\u201cthorough empirical investigations\\u201d (94st, EnjK, H25b) that demonstrate that \\u201cSPELL achieves the best trade-off between image diversity and generation quality\\u201d (EnjK)\", \"our paper is among the first to enable image-level protection at large scale _\\u201cwhich is one of the significant challenges for real users\\u201d_ (94st, H25b, EnjK, V3e4).\", \"More importantly, we are grateful for your many insights and questions. They have triggered a few minor modifications. We have updated our draft with all experiments and clarifications you have requested. They are highlighted in blue in the revised paper and include:\", \"An additional ablation that shows that with more inference-time runtime, SPELL shields all 1M+ protected images in 99.84% of the generated images\", \"400 new example images on unseen prompts from MS COCO to qualitatively study SPELL\\u2019s increased diversity (Figures 25-34)\", \"Extended runtime results in both the diversity (Table 5) and the protection experiment (Table 2) that show that SPELL\\u2019s cost is negligible in small batches and scales sub-linearly when shielding 1M+ images simultaneously\", \"A fourth trade-off plot that investigates the FD_DINOv2 in Figure 4\", \"Table 4 which shows all 8 metrics that we have tested in the trade-off experiment for all hyperparameter combinations of SPELL and the three recent baselines\", \"A more detailed comparison between the intra-batch component of SPELL and Particle Guidance,\", \"Standard deviations across five seeds for all six diffusion models in Table 1\", \"We are very grateful for Reviewer **EnjK**'s interaction, and for their subsequent decision to increase their general score $5\\\\rightarrow 6$.\", \"We remain available for further requests 
for clarification or questions, for the remainder of the rebuttal period.\", \"The Authors\"]}", "{\"comment\": \"Thank you for your reply!\"}", "{\"title\": \"Reviewer V3e4: Rebuttal discussion\", \"comment\": \"Many thanks for reading our paper despite this not being your area of expertise. Still, we are very happy to see that despite this mismatch you had a positive impression of our paper overall. ICLR papers should be tailored to reach a wide readership, and we took great efforts to make our paper easy to parse, through e.g. Fig. 1 and other illustrations.\\n\\n> **I find the issue addressed in this paper to be interesting and important.**\\n\\nIndeed, we believe these are core issues that appear naturally when letting end-users interact with diffusion models.\\n\\nIn our first application (intra-batch diversity), we were happy to advance the recent SOTA set in ICLR 2024 (https://arxiv.org/abs/2310.17347 and https://arxiv.org/abs/2310.13102) and NeurIPS 2024 (https://arxiv.org/abs/2404.07724). \\n\\nAs for our second application (protecting images), we are the first (to our knowledge) to propose a method for this task that works at such a scale, with an impressive reference set of more than 1 million images.\"}", "{\"metareview\": \"This paper proposes a technique for shielding the diffusion model from protected images by adding sparse repellency terms to the diffusion process. SPELL is a post-training intervention that enhances image diversity and safeguards specific images by selectively adjusting generation paths. Shielding generated images from a set of protected images is a very important problem that needs to be studied. This paper shows some quantitative and qualitative results demonstrating the approach.\", \"additional_comments_on_reviewer_discussion\": \"One of the main concerns that reviewers (and I myself) have is the lack of sufficient experimental results, trade-offs, quantitative analysis, challenging text-to-image cases, etc. 
The authors addressed some of these concerns in the rebuttal, which I appreciate. But I feel the authors could have done much better in showing the effectiveness of the approach.\\n\\nInstead of just using ImageNet and COCO, where it is hard to see the effect of guardrailing, the authors could have used face datasets and shown if some identities are not generated. Use of face recognition networks can give a concrete score for this.\\nAn analysis of the shielding dataset size would be nice to have. For example, if I only have one image per face subject, is it sufficient? Do I need to have multiple samples per identity for the approach to work well, etc.?\\nSome examples on SOTA high-resolution image generators like Flux on challenging use cases would be nice.\\n\\nI feel like the paper as such is borderline (excluding reviewer V3e4). Including more concrete qualitative examples would make the paper very strong. So, as such, I regret to inform the authors that the paper is not in a state to accept to a competitive conference like ICLR. I would really encourage the authors to take into account all the feedback and resubmit with a strong submission.\"}", "{\"summary\": \"This paper proposes a novel way to diversify diffusion generations by introducing repellency terms to the diffusion SDE. 
It achieves diversity of images generated from one prompt and/or prevention of generations similar to the reference set, which is one of the significant challenges for real users.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This paper tackles a very practical problem of repetitive generations of text-to-image diffusion models, with respect to both protected images in the training set and previously generated images.\", \"The proposed method, repellency terms, has a concrete background and is intuitive for solving the problem.\", \"Thorough empirical investigations provide enough understanding of how SPELL can increase the diversity of generated images, and the advantages for real-world applications.\"], \"weaknesses\": [\"While SPELL can be used for any diffusion pipeline, the effectiveness of SPELL for smaller models or domains other than ImageNet is not fully investigated.\", \"The efficiency of SPELL is validated for ImageNet class or simple text prompts where the diversity within a text prompt is huge. Evaluation for more complex text prompts that align with more practical usage of text-to-image diffusion models would validate the effects of SPELL more.\", \"As noted by the authors, the proposed SPELL does not provide a very tight guarantee to avoid generations of similar images to the reference set, which can limit the applicability of SPELL for high-risk cases.\"], \"questions\": [\"Applying repellency terms based on the current state x_t instead of the expected final output seems applicable for the intra-batch repellency case, providing better diversity to a batch of generated images. Can it be one of the baselines to compare SPELL for the intra-batch case?\", \"It seems like SPELL forces each trajectory to arrive near the boundary of other balls (shields). 
Can further methods (something similar to momentum or just larger overcompensation) improve the diversity of generated images?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Thank you for the detailed response about my review, and sorry for the late reply.**\\n\\nMany thanks for taking the time to read our rebuttal. Your reply is not late at all, since the deadline for discussion has now been extended.\\n\\n> **I have one remaining question regarding the first question. If I understand correctly, the **major difference** between the proposed SPELL and PG is where the distance between two points is computed (the space of the expected final output for SPELL and the space of the current value for PG).** Does the sparsity property of SPELL **come from this distinction** or some differences in the detailed mechanism, **such as the one used to find neighbors to repel?**\\n\\nThanks for these questions. SPELL and PG are indeed closely related, as we highlight in the paper L.246. \\n\\nTo go back to their definition, the formulation of PG (Eq. 4 in their paper, L.249 in ours) defines an interaction potential (using RBF kernels) on particle positions $x_t$. The interventions that result from that potential are dense (the particles\\u2019 trajectories are always updated, even when they are far away, regardless of being neighbors or not) and exhaustive (all $B^2$ kernel values for a batch of size $B$ impact the trajectory).\\n\\nSPELL moves away from that model in two orthogonal aspects: As you point out, SPELL (1) is defined as \\u201cfuture looking\\u201d, in that it considers the expected generation of these points to guide the trajectory rather than $x_t$; and (2) SPELL uses sparse updates, by definition, in the way we set these interventions in **Eq. (5)**. 
The notion of neighbours kicks in because of sparsity.\\n\\nOf these two, **the second factor, sparsity of interventions is the major difference, not the \\u201cfuture looking\\u201d aspect**, as already clarified in **L.261** of the revision. While PG could be used in principle with the same \\u201cfuture looking\\u201d approach, the crucial idea in PG is to posit that particles should be constantly re-updated, using a smooth kernel, as is common in the interacting particles literature they refer to (their Section 4). This is different from SPELL.\\n\\nOur decision to use a sparse intervention model is highlighted in the title of our paper and the name of our method. This is the major difference, which leads to:\\n- applicability to large scale protection scenarios, where reference sets are of the order of millions (When generating a batch `B`, PG would need a costly computation of `B x 1M` Gaussian kernel values at each diffusion step if it were adapted to that task, and those 1M contributions would likely have unintended consequences for quality of generation)\\n- qualitative improvements, because sparse interventions result intuitively in far fewer changes in the particle trajectories. **This is our analysis in Fig.5 + 9~16**, as well as results in Fig. 4.\\n\\nHence, as a **TL;DR** summary, SParsity in SPELL is the major differentiator w.r.t. PG. Sparsity comes from geometric considerations (sketched in Fig. 2), and from the definition of our repulsive terms (our choice in Eq. 5). Sparsity does not come from the additional consideration of using \\u201cfuture looking\\u201d $E[X_0|X_t=x_t]$ instead of $x_t$.\"}", "{\"title\": \"Reviewer H25b: Rebuttal discussion\", \"comment\": \"Many thanks for your encouraging comments and for praising our presentation with a score of 4. 
We are grateful for your many questions, which we try to answer within the timeline of this rebuttal.\\n\\n> **The core issue for this paper is the soundness in terms of the superior effectiveness compared to other similar methods. In Fig. 4, there are only comparisons on recall-precision, coverage-density, and vendi-clip trade-offs, while other concerning metrics in Tab. 1, such as FID, FD_DINOv2, are not included. I also failed to find reasoning on why only these three metric-pairs are selected.**\\n\\nThis is a great point. The reason we have used these metric pairs is that they are popular diversity (y-axis) vs. quality (x-axis) pairs. But following your request, we have added a fourth plot in Fig. 4, with FD_DINOv2 vs CLIP Score. We have also added Table 4, with all metrics, to provide an exhaustive view.\\n\\n\\n> **This discussion of the fundamental methodology difference towards **Particle Guidance** on Page 5 is not convincing, as it seems SPELL can be treated as a special case of Particle Guidance when the energy potential \\u03d5t is simply calculating the difference.**\\n\\nThanks for this comment. We agree that we were not clear enough in this section, starting with the title _(Intra-batch) SPELL as Particle Guidance_, which was, in retrospect, confusing. We have corrected this section in the paragraph you reference. Given more space, we can expand the discussion a bit further. To be more detailed:\\n\\n- PG was proposed for intra-batch diversity following principles from the literature on interacting particles (_not_ to protect generation away from a reference set). We believe this viewpoint has guided two important choices: casting modifications of the trajectories as gradients of an interaction potential + use of a soft-decaying kernel that considers _exhaustively all_ interactions. This means that intra-batch similarity in PG is _always_ guiding / modifying the sampling of particles, throughout time and w.r.t. 
samples.\\n\\n- By contrast, SPELL is not defined as an energy minimizing principle, but follows instead from geometric principles (Fig. 1) which cannot, to our current knowledge, fit into such a \\u201cgradient\\u201d based perspective (we tried but experimental evidence suggests that SPELL interventions in Eq. 8 are not conservative). By doing away with this \\u201cinteraction energy\\u201d principle we lose PG\\u2019s mathematical interpretation, but we gain efficiency and the ability to simply state our goal of making sure the trajectories almost always flow normally, and are only \\u201cbumped\\u201d when strictly needed (both w.r.t. batch but also in time, see Fig. 5(b), Fig. 9~16). In our view, that sparsity w.r.t samples _and_ time is crucial to scale in our application to image protection, but also to get unperturbed trajectories, and our experiments validate this intuition.\\n\\n> **This paper could benefit from more concrete visual evidences.**\\n\\nThis is a great suggestion. We are getting the computational resources to generate further example images, and will come back to you very soon.\\n\\n> **As it is difficult to show all possible trade-offs, a better way of giving concrete comparisons would be adding detailed tables for each of the methods. Each table shows all results with rows to be parameters, and columns to be all the metrics. Adding such tables would surely address my core concern, but due to limited time, it is also promising if the reasoning of choosing these trade-offs are persuasive and convincing.**\\n\\nThanks for the suggestion! We have added Table 4. We hope that this addresses your core concern.\\n\\n> **I'm generally not quite sure if SPELL could be treated as a special case of Particle Guidance in terms of intra-batch diversity. A brief explanation would be sufficient.**\\n\\nWe hope our answer above assuages your concerns. 
\\n\\nTo recapitulate, the two fundamental differences with PG lie in (i) adding guidance terms that are _not_ grad-potentials (ii) designing specifically very sparse repellence terms, with sparsity in two senses: \\n- They act on trajectories rarely over time, and most interventions happen early and then vanish, see Fig. 5b. This is thanks to our focus, from the start, on _expected_ generation and not location at time $x_t$.\\n- They only add sparse terms, to the extent that most of them are 0, when comparing to points in the reference set (either self-reference in intra-batch, or external reference for protection). This makes our method scale to 1.2M points as a reference set and leaves dynamics unperturbed *when perturbations are not needed*.\\n\\n> **This paper would also benefit from providing more diversity results, but it is understandable if this is infeasible considering the limited time for rebuttal.**\\n\\nThis is a great point. As mentioned above, we are getting access to resources needed to generate these examples. We will get back to you soon.\\n\\n> **MINOR**\\n\\nThanks for spotting these! We addressed these points in the revision.\"}", "{\"summary\": \"This paper introduces a novel post-training guidance mechanism, SPELL, which primarily addresses the training-set protection issue and the diversity problem of image diffusion models. SPELL is designed to repell the latents away from a trajectory that is close to a protected image set or from other latents within the same inference batch. It dynamically introduce small corrections to the latents in a way that is sparse and only triggered when the predicted trajectory is too closely to a reference domain. The authors evaluate SPELL on multiple state-of-the-art open-sourced diffsion models, showing its effectiveness. 
They also provide comparisons to other previous approaches that are also aimed at addressing diversity or working with a protected image set, which show some superior results on selected trade-off plots.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The overall problem that this paper addresses is one of the important issues that current diffusion-based image generation models possess, which adds to the value of motivations for this paper.\\n2. This paper is well-written and easy to understand. Notations within the background section and the method section are self-contained and clear to follow. Fig. 2 further adds readability.\\n3. The method this paper proposes is novel, which provides conceptual insights particularly in the method section (Sec. 4).\\n4. Experiments contain both ablation studies and comparisons to other methods. Fig. 3 shows the effect of SPELL's only parameter **r**, in which we see effectiveness especially around 10-20.\", \"weaknesses\": \"MAJOR:\\n1. The core issue for this paper is the soundness in terms of the superior effectiveness compared to other similar methods. In Fig. 4, there are only comparisons on recall-precision, coverage-density, and vendi-clip trade-off, while other concerning metrics in Tab. 1, such as $\\\\text{FID}$, $\\\\text{FD}_\\\\text{DINOv2}$, are not included. I also failed to find reasoning on why only these three metric-pairs are selected.\\n2. This discussion of the fundamental methodology difference towards **Particle Guidance** on Page 5 is not convincing, as it seems SPELL can be treated as a special case of Particle Guidance when the energy potential $\\\\phi_t$ is simply calculating the difference.\\n3. This paper provides abundant unconditional results with the protected image set being ImageNet-1k in Fig. 17 in the appendix, but it seems that the only qualitative results for diversity are in Fig. 1. 
This paper could benefit from more concrete visual evidence.\", \"minor\": \"1. In contributions, bullet point 3, **generated** is misspelled.\\n2. In contributions, bullet point 2, the explanation of the **future looking** feature is not straightforward to understand; I'd suggest keeping it brief here as a bullet point in contributions, and further explaining it in the method section with mathematical symbols, such as $x_t, x_0$.\", \"questions\": \"1. As it is difficult to show all possible trade-offs, a better way of giving concrete comparisons would be adding detailed tables for each of the methods. Each table shows all results with rows being parameters and columns being all the metrics. Adding such tables would surely address my core concern, but due to limited time, it is also promising if the reasoning for choosing these trade-offs is persuasive and convincing.\\n2. I'm generally not quite sure if SPELL could be treated as a special case of Particle Guidance in terms of intra-batch diversity. A brief explanation would be sufficient.\\n3. This paper would also benefit from providing more diversity results, but it is understandable if this is infeasible considering the limited time for rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
EWP9BVRRbA
Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector
[ "Youcheng Huang", "Fengbin ZHU", "Jingkun Tang", "Pan Zhou", "Wenqiang Lei", "Tat-Seng Chua" ]
Visual Language Models (VLMs) are vulnerable to adversarial attacks, especially those from adversarial images, which are, however, under-explored in the literature. To facilitate research on this critical safety problem, we first construct a new la**R**ge-scale **A**dversarial images dataset with **D**iverse h**A**rmful **R**esponses (RADAR), given that existing datasets are either small-scale or only contain limited types of harmful responses. With the new RADAR dataset, we further develop a novel and effective i**N**-time **E**mbedding-based **A**dve**RS**arial **I**mage **DE**tection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of VLMs, which we call *the attacking direction*, to achieve the detection of adversarial images against benign ones in the input. Extensive experiments with two victim VLMs, LLaVA and MiniGPT-4, well demonstrate the effectiveness, efficiency, and cross-model transferability of our proposed method. Our code is included in the supplementary file and will be made publicly available.
[ "Visual Language Models", "Adversarial Attacks", "Attacking Directions", "Adversarial Defense", "Detection of Adversarial Samples" ]
https://openreview.net/pdf?id=EWP9BVRRbA
https://openreview.net/forum?id=EWP9BVRRbA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "VTDkolRPIN", "S72kPfE93m", "NDIhMrwMQP", "FTaAeKzEsm", "4O32paSyst" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731563055207, 1730692439868, 1730554385408, 1730375942199, 1730208803020 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6593/Authors" ], [ "ICLR.cc/2025/Conference/Submission6593/Reviewer_TrC1" ], [ "ICLR.cc/2025/Conference/Submission6593/Reviewer_djPR" ], [ "ICLR.cc/2025/Conference/Submission6593/Reviewer_GRGh" ], [ "ICLR.cc/2025/Conference/Submission6593/Reviewer_6mXm" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a new method to detect jailbreaking attacks against visual language models (VLMs). The method is based on the observation that large language models contain a set of steering vectors in the intermediate embeddings that can be modulated to generate texts towards certain specific attributes. Based on this observation, the authors propose to (1) build a larger dataset of adversarial attacks, and (2) use the dataset to learn an attack direction in steering vectors as indication of attacks. To validate the idea, the authors created a dataset of 4000 samples, include a 500 sample training set and three test sets. Training set uses images from COCO validation set and queries from the train set of HH-rlhf harm-set. Test sets use COCO test set and queries from the test set of HH-rlhf harm-set, D-corpus, and Harm-Data.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The proposed method is based on an observation cross-validated from multiple previous studies\", \"The proposed method outperformed a baseline\"], \"weaknesses\": [\"The evaluation design is very flawed.\", \"The evaluation does not include performance on benign queries. 
As a result, how likely the method is to generate false alarms is unknown.\", \"While the RADAR dataset is larger than previous ones and is generated using different attack queries, all the images are generated in the same way. As a result, the experiment results may not generalize to other attack methods.\", \"The evaluation also lacks experiments on adaptive attacks, i.e., whether it's possible to generate adversarial images that would lead to harmful output yet do not go beyond the learned threshold.\"], \"questions\": [\"What is the false detection rate on common benign datasets (e.g., VQA)?\", \"What is the detection rate against other attack methods?\", \"Can the proposed method detect adaptive attacks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an innovative and efficient approach for detecting adversarial images in Vision-Language Models (VLMs), addressing their vulnerability to adversarial attacks. Current datasets and detection methods face limitations, either lacking diversity or being computationally heavy, particularly with low-visibility attacks. Key contributions of this work include a novel method for identifying the attacking direction in VLMs\\u2019 hidden space, which serves as a defense mechanism against adversarial images. This approach is integrated into the NEARSIDE method, which is efficient, requiring only a single forward pass and showing cross-model transferability. The authors also constructed the RADAR dataset, comprising 4,000 high-quality adversarial samples, surpassing previous datasets in scale and harmful response diversity. Through experiments on LLaVA and MiniGPT-4, NEARSIDE demonstrates high accuracy and significant speed improvements over baseline methods, highlighting its potential to enhance VLM security. 
While promising, further investigation could strengthen its generalizability across different VLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This research presents an innovative approach to detecting adversarial images by pinpointing the attacking direction within the hidden space of Vision-Language Models (VLMs). By focusing on this attacking direction, the method offers a fresh perspective that contrasts with traditional techniques, which often emphasize response discrepancies or image purification. Additionally, the introduction of the RADAR dataset, which captures a wide range of harmful responses, fills a crucial gap in evaluating VLM safety and significantly enhances the quality of related research.\\n\\nThe NEARSIDE method is thoughtfully designed to effectively distinguish between adversarial and benign inputs, demonstrating impressive accuracy and making it practical for real-time applications. Rigorous experiments, including comparisons with the leading JailGuard baseline on the RADAR dataset, confirm the method's effectiveness. The exploration of cross-model transferability and varying perturbation radii further highlights a deep understanding of how the method performs under different conditions.\\n\\nClarity is a strong point of this paper, as it articulately explains the vulnerabilities of VLMs and the urgent need for effective detection strategies. The description of the NEARSIDE method is straightforward, supported by clear illustrations that guide readers through the process. Furthermore, the well-structured presentation of the experimental setup and results makes it easy to follow, allowing readers to appreciate the significance and implications of the research findings.\", \"weaknesses\": \"1.The concept of attacking direction is intriguing, yet the paper does not adequately address its stability across varying training datasets and model architectures. 
This is particularly relevant given that visual language models (VLMs) are frequently updated in practice. I suggest conducting experiments to evaluate how changes in training data or slight architectural modifications impact the stability of the attacking direction. A deeper analysis of these factors could enhance the understanding of NEARSIDE's detection performance.\\n\\n2.The evaluation of the proposed method is primarily limited to LLaVA and MiniGPT-4, raising concerns about its applicability to other VLMs and LLMs with different architectures. While the initial cross-model transferability analysis is a good starting point, a more extensive evaluation is necessary. I recommend testing NEARSIDE across a broader spectrum of models, particularly those with varied visual encoders and training methodologies, to better assess its generality and potential need for modifications.\\n\\n3.The current focus on detecting adversarial images generated by existing techniques overlooks the possibility of attackers developing new strategies to evade detection. As the landscape of adversarial attacks evolves, it is crucial to evaluate NEARSIDE's resilience against these future threats. Conducting simulations of more sophisticated adaptive attacks could provide insights into potential countermeasures and enhance the robustness of the detection method.\\n\\n4.The paper acknowledges that the detection threshold, determined from the training set, may not be optimal across all datasets. However, it lacks a thorough investigation into dynamic threshold adjustment methods. I recommend exploring adaptive thresholding techniques or statistical analyses that could lead to more reliable threshold settings. Additionally, a detailed examination of how different thresholds affect false positive and false negative rates would be beneficial.\\n\\n5.Although NEARSIDE is claimed to be efficient relative to baseline methods, the computational complexity analysis is somewhat superficial. 
The current focus on inference time neglects other critical factors, such as memory usage and training costs associated with extracting the attacking direction. A more comprehensive breakdown of computational costs throughout the process is essential. Discussing potential optimizations could further enhance NEARSIDE's scalability and performance in practical applications.\", \"questions\": \"No further questions; the suggestions have been fully covered in the **Weaknesses** section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper makes two main contributions to address the under-explored problem of adversarial image attacks on Visual Language Models (VLMs). First, it introduces RADAR, a novel large-scale dataset of adversarial images that elicit diverse harmful responses from VLMs. Second, it proposes NEARSIDE, an efficient detection method that leverages a distilled \\\"attacking direction\\\" vector from VLM hidden states to identify adversarial images. The effectiveness of NEARSIDE is validated through extensive experiments on LLaVA and MiniGPT-4, demonstrating strong performance and cross-model transferability. While the authors acknowledge limitations in covering all possible harmful contents and the specific scope of their detection method, their work provides valuable resources and insights for improving VLM safety against adversarial image attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a well-structured and comprehensive approach to VLM safety.\\n2. The dual contribution (RADAR dataset and NEARSIDE method) provides valuable resources for the research community.\\n3. The methodology is clearly articulated and technically sound.\", \"weaknesses\": [\"1. 
Structural Issues:\", \"The organization of supplementary materials could be improved by merging Appendix A and D to enhance readability and logical flow.\", \"2. Limited Evaluation Scope:\", \"The NEARSIDE method's evaluation is confined to the proposed RADAR dataset.\", \"Cross-dataset validation would strengthen the method's generalizability claims.\", \"3. Technical Oversight:\", \"The method's behavior under mixed adversarial-benign inputs (adversarial image + benign text) requires clarification.\", \"Comparison with multi-modal detection methods like JailGuard needs more detailed discussion.\", \"4. Minor Issues:\", \"Typographical error in Figure 6 caption (\\\"exemplar\\\")\"], \"questions\": \"1. How does NEARSIDE perform on existing adversarial image datasets beyond RADAR?\\n2. What is the method's response when processing combinations of adversarial images with benign prompts?\\n3. Could the authors elaborate on NEARSIDE's limitations compared to multi-modal detection approaches?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the vulnerability of Visual Language Models to adversarial image attacks. The authors introduce RADAR, a dataset of adversarial images with diverse harmful responses. Additionally, they propose NEAR-SIDE, a detection method that leverages a single vector derived from VLM hidden states, known as the 'attacking direction,' to distinguish adversarial images from benign ones. Experiments on the LLaVA and MiniGPT-4 models show that the method is effective, efficient, and transferable across models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-structured and provides a clear motivation. The proposed method is straightforward, efficient, and effective, addressing a timely issue that underscores the need to develop robust VLM models moving forward. 
Additionally, the approach of extracting the 'attacking direction' (shifting from harmlessness to harmfulness) from the VLM embedding space is novel and clever. The introduction of the RADAR dataset is a valuable contribution.\", \"weaknesses\": [\"The method does not address adversarial attacks aimed at generating benign yet contextually altered text, which may not contain harmful content but still alters the original intent. Could the authors discuss how their approach might be extended or modified to handle such cases, where adversarial examples produce benign text that misrepresents the original context?\", \"The threat model needs further clarification. Could the authors define the assumed threat model more explicitly, specifying the attacker\\u2019s level of access, capabilities, and the defender's available resources? Including this in a dedicated section would enhance clarity, particularly around the assumed white-box access to the victim model.\", \"The method currently relies on having both adversarial and benign samples to calculate the direction and threshold. Could the authors discuss how their approach might be adapted for scenarios where only single images are available for evaluation, or clarify any limitations in these cases?\", \"Why does the method focus on the last layer of the LLM? Could the authors justify this choice, ideally through an ablation study comparing performance across different layers?\", \"While the method appears efficient based on empirical results, some visualization of the \\\"attacking direction\\\" would be helpful, particularly in the context of cross-model transferability.\", \"Only one baseline is provided for comparison, which limits the evaluation of the method's effectiveness. Could the authors explain why other relevant baselines, such as the approach in Xu et al. [1] that leverages the semantic relationship between malicious queries and adversarial images, were not included? 
Expanding the baseline comparisons would strengthen the evaluation in a revised version of the paper.\", \"[1] Xu, Yue, et al. 'Defending jailbreak attack in vlms via cross-modality information detector.'\\\"\", \"What if the training and testing images come from different datasets? Could the authors evaluate the robustness of their method across diverse image distributions by conducting additional experiments using separate datasets for training and testing? This would help assess the method's generalizability.\", \"**Line 478:** Summarize the results rather than simply directing readers to the appendix.\"], \"questions\": \"Check Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EWNH3QTSxd
Which Experiences Are Influential for RL Agents? Efficiently Estimating The Influence of Experiences
[ "Takuya Hiraoka", "Guanquan Wang", "Takashi Onishi", "Yoshimasa Tsuruoka" ]
In reinforcement learning (RL) with experience replay, experiences stored in a replay buffer influence the RL agent's performance. Information about how these experiences influence the agent's performance is valuable for various purposes, such as identifying experiences that negatively influence underperforming agents. One method for estimating the influence of experiences is the leave-one-out (LOO) method. However, this method is usually computationally prohibitive. In this paper, we present Policy Iteration with Turn-over Dropout (PIToD), which efficiently estimates the influence of experiences. We evaluate how accurately PIToD estimates the influence of experiences and its efficiency compared to LOO. We then apply PIToD to amend underperforming RL agents, i.e., we use PIToD to estimate negatively influential experiences for the RL agents and to delete the influence of these experiences. We show that RL agents' performance is significantly improved via amendments with PIToD.
[ "reinforcement learning", "data influence estimation" ]
Reject
https://openreview.net/pdf?id=EWNH3QTSxd
https://openreview.net/forum?id=EWNH3QTSxd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xRbJsySUhN", "xCft7aSoLt", "wrmFoAt355", "jcB0WrhBGH", "hL8g8MrqWb", "aFO2C2VpVU", "ZrvB8Epobg", "SnrRJQAcYj", "LSCwedEw2c", "JXsqejdCas", "FiFTmbsP5m", "BwPJJ9ST2L", "A6HvUrBRy2", "9iWG6ZGBQB", "5Um5NDNW5o", "2D4cnvDilz" ], "note_type": [ "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732430209328, 1730311687971, 1733777873471, 1737523420315, 1732429963373, 1733050628593, 1730258232568, 1730455242752, 1732430308645, 1732429534139, 1732429692233, 1732875860982, 1731002401685, 1732575725481, 1732429903816, 1733050684476 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_YodU" ], [ "ICLR.cc/2025/Conference/Submission874/Area_Chair_2brp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_A595" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_cTe3" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_YodU" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_xR4M" ], [ "ICLR.cc/2025/Conference/Submission874/Reviewer_A595" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ], [ "ICLR.cc/2025/Conference/Submission874/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer cTe3\", \"comment\": \"Thank you for your valuable comments.\\nWe will revise the paper based on your feedback. \\n\\n**Q1.** \\nThe metric used to show self-influence appears inappropriate. 
As noted in L246, is in a state where it has not been trained on , so is expected to be greater than zero in most cases, regardless of whether is beneficial for learning. Additionally, since , is likely always less than zero. Thus, these metrics do not directly indicate whether the experience has a positive or negative influence on RL learning. \\n**Q1'.** \\n[About Weakness 1] You mentioned that the PI and PE metrics indicate whether a specific experience has a positive or negative effect. Could you clarify this further? The current metrics seem to only reflect whether or not learning utilized $e_i$. \\n\\n**A1.** \\nRegarding the PI and PE metrics in Section 5.1, we did not claim that \\\"the PI and PE metrics indicate whether a specific experience has a positive or negative effect.\\\" \\nIt seems there might be a difference in interpretation of the terms \\\"positive\\\" and \\\"negative.\\\" \\nIn Section 5.1, we use \\\"positive\\\" to mean a value greater than or equal to zero and \\\"negative\\\" to mean a value less than or equal to zero. \\nWe are not using \\\"positive\\\" to imply that an experience is beneficial for learning or \\\"negative\\\" to imply that it is harmful. \\n\\n\\n**Q2.** \\n[About Weakness 2] Is this approach fundamentally different from simply creating a policy ensemble through dropout and selecting the best-performing one? If the truly optimal is selected, does it necessarily mean that the experience has a negative impact?\\n\\n**A2.** \\nThis approach fundamentally differs from a simple policy ensemble in that specific policies within the ensemble are trained exclusively on specific sets of experiences. \\nIf the optimal policy has not been trained on a particular set of experiences and outperforms those that have, it indicates that the specific set of experiences has a negative impact. \\n\\n\\n**Q3.** \\n[About Weakness 3] Could you summarize the experiments in the appendix and explain what each aims to demonstrate? 
It would be more beneficial if these results were integrated into the main paper.\\n\\n**A3.** \\nWe will add a summary of the objectives and results of the experiments in the appendix to the main paper. \\n\\n\\n**Q4.** \\n[About Weakness 4] Could you also present the experiments mentioned above? \\n\\n**A4.** \\nWe are currently conducting comparison experiments with SAC, and we will add the results once they are complete.\"}", "{\"summary\": \"This paper presents Policy Iteration with Turn-over Dropout as a method for estimating the effect a state-action-reward-next state experience has on a policy or Q function in the area of off-policy RL. The method employs masks to drop out parameters, so that the effect of not training on an experience can be estimated without having to retrain the policy or Q values from scratch. The paper then investigates estimating various quantities including td-error and episode return.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Being able to efficiently estimate the effect a particular data point has on a network used to estimate $\\\\pi$ or $Q$ is a very powerful tool, and to my knowledge this machinery has not been applied to a Deep RL setting before. I like that the authors have aimed to evaluate it for a variety of different purposes.\\nHowever, it seems that only Section G in the appendix, and a short paragraph at the end of section 6 actually attempt to answer the question in the title of the paper.\", \"weaknesses\": \"# Major\\n\\nMore explanation in Section 4 of the PIToD method would be very helpful for a reader's understanding. \\n\\\"Thus, some readers may suspect that the parameters dropped out by $m_i$ (i.e., the parameters obtained by applying $w_i$) are not influenced by $e_i$.\\\" - This sentence caused a lot of confusion to me when reading the paper. 
The phrasing seems to suggest that the parameters dropped out are indeed affected, but this doesn't seem to be the case (as the reader would suspect). I am also struggling to understand Appendix Section A. Assumption 1 seems extremely strong and appears to be *almost* exactly defining the property you are looking to prove. Looking at the equality, doesn't it mean that the gradient for $i'$ is 0 for everything but $i'$ due to the indicator?\\n\\nSection C in the appendix contains a lot of interesting content.\\nThe findings from the Group Mask preliminary experiments in the appendix seem quite important - \\\"In our preliminary experiments, we found that the influence of a single experience on performance was negligibly small\\\". This should be mentioned at the very least in the main paper. \\n\\\"Key implementation decisions to improve learning.\\\" is also very important. The implementation details, and the architectural experiments you conducted should not be relegated to the appendix in this manner. Especially since Figure 10 shows they are critical to your method working.\\nAdditionally, why are you using an ensemble of 20 MLPs in the architecture in this manner? Is it important to performance? Does it interact positively with your method? More discussion on why these design decisions were made is needed, or at the very least a comment stating it was the first architectural starting point used.\\n\\nMore explanation is needed on Section 5.1 to establish the importance of the ratios being considered. What is the importance/significance of the differences in the ratios between policy eval/improvement? For Eqn 8 I can understand that the td error if higher for an experience you haven't trained on, but for Eqn 9 I do not quite see why the action chosen by $\\\\pi_{\\\\theta, w_i}$ should have a lower estimated Q.\\n\\nSection 5.2 is very misleading since you're estimating the time it would take LOO. 
Given that important sections have been pushed into the appendix, I would advise replacing these with other more relevant/concrete content. \\n\\n\\\"In our setup, L_ret is estimated using Monte Carlo returns collected by rolling out policies\\\" - How many rollouts are used for each estimation?\\nAdditionally, how do you utilise Eq. 10 to identify experiences? Are you rolling out the agent multiple times to collect MC returns? Is this factored into your training budget and reflected in Figure 6?\\n\\nI don't understand why you're not showing Figure 11 in place of Figure 6? What is the particular thing you want to highlight by showing a specially picked subset of the lines in Figure 6?\\n\\nMuch more analysis and results need to be presented on characterising which experiences are harmful (or beneficial). The paper begins this kind of investigation but it feels like an afterthought. To me, this is one of the most exciting parts of the paper, utilising the tools outlined to identify what experiences are harmful for performance (with potential links to 'Ray Interference: a Source of Plateaus in Deep Reinforcement Learning'). This would be of much interest to the community.\\n\\n# Minor\\n\\n- No references for RL, and an MDP is never mentioned in the paper.\\n- No reference for the LOO estimator.\\n- CQL cited under off-policy RL in the introduction feels unnecessary.\\n- \\\"In the previous section, we demonstrated that PIToD can accurately and efficiently estimate the influence of experiences.\\\" - This is too broad a claim. In section 5 you estimated the influence an experience can have on self-influence.\", \"questions\": [\"For section G, Figure 16 shows very little change in the results when removing adversarial experiences, why is this? 
Figure 17 shows big differences in the estimations before and after amendments, but it would be good to clarify that your method can identify the adversarial experiences explicitly (as opposed to showing the effect of removing identified experiences, which indirectly provides some evidence for this).\", \"Figure 10 shows huge changes in the results across some architectural choices, please comment more on these.\", \"(There are also some questions sprinkled throughout the above section)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies how to estimate the influence of a single entry in the experience replay. The naive leave-one-out approach is too expensive and this paper proposes a novel method based on masking the network parameters. That being said, reviewers are concerned about the theoretical justification of the proposed approach. In particular, it is not clear how removing experience would affect exploration of RL algorithms (e.g., Algorithm 3). To better justify the removal of experience, it is necessary to theoretically study how different experiences contribute to exploration and how the removal of experience affects credit assignment. Those effects cannot be characterized by the scalar metric used in the paper. Moreover, to demonstrate the usefulness of the proposed technique, it is worth considering more benchmark environments and more variants of using experience replays.\", \"additional_comments_on_reviewer_discussion\": \"There is unfortunately not much discussion, but reviewers are concerned about the theoretical justification of the proposed approach and the significance of the results. 
The only positive reviewer is unable or unwilling to argue for acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer YodU\", \"comment\": \"**Q7.**\\nI don't understand why you're not showing Figure 11 in place of Figure 6? What is the particular thing you want to highlight by showing a specially picked subset of the lines in Figure 6?\\n\\n**A7.** \\nThis is because we are particularly interested in the removal of negatively influential experiences in cases where learning does not proceed well. \\nFigure 11 shows the results of removal in the average case, which includes many trials where learning is proceeding without issues, and removing negatively influential experiences has little effect. \\n\\n\\n**Q8.** \\nMuch more analysis and results need to be presented on characterising which experiences are harmful (or beneficial). The paper begins this kind of investigation but it feels like an afterthought. To me, this is one of the most exciting parts of the paper, utilising the tools outlined to identify what experiences are harmful for performance (with potential links to 'Ray Interference: a Source of Plateaus in Deep Reinforcement Learning'). This would be of much interest to the community. \\n\\n**A8.** \\nWe plan to conduct additional analysis on harmful and beneficial experiences and include these findings in the paper. \\nThank you for the reference; we will also explore its relevance and include a discussion if possible. \\n\\n\\n\\n**Q9.** \\nMinor comments\\n\\n**A9.** \\nThank you for your feedback. We will make the revisions as you pointed out. \\n\\n\\n\\n**Q10.** \\nFor section G, Figure 16 shows very little change in the results when removing adversarial experiences, why is this? 
Figure 17 shows big differences in the estimations before and after amendments, but it would be good to clarify that your method can identify the adversarial experiences explicitly (as opposed to showing the effect of removing identified experiences, which indirectly provides some evidence for this).\\n\\n**A10.** \\nThe little change observed in Figure 16 is likely due to overlap between the masks of different experiences. \\n\\nThe ability of our method to explicitly identify adversarial experiences is discussed in the final paragraph of Appendix G (\\\"Can PIToD identify adversarial experiences?...\\\"). \\n\\n\\n**Q11.** \\nAdditionally, why are you using an ensemble of 20 MLPs in the architecture in this manner? Is it important to performance? Does it interact positively with your method? More discussion on why these design decisions were made is needed, or at the very least a comment stating it was the first architectural starting point used. \\n**Q11'.** Figure 10 shows huge changes in the results across some architectural choices, please comment more on these. \\n\\n**A11.** \\nWe initially tried directly applying dropout to individual parameters, but this approach was too unstable. \\nTo address this, we explored 50 to 100 alternative methods for stabilizing training. Among these, the ensemble-based approach we used was the most effective. It is crucial for ensuring the performance of PIToD. \\nTypically, an ensemble size of 5 is sufficient for improving performance. However, we observed that increasing the ensemble size tends to make it easier to estimate the influence of experiences. Based on this observation and our computational resources, we chose an ensemble size of 20.\"}", "{\"comment\": \"Thank you for your comments.\\nDue to the limited time available for the rebuttal, we may not be able to address all your comments, but we are working on incorporating as many of them as possible.\", \"below_are_responses_to_some_of_your_questions\": \"> A2. 
The added explanations are unclear: fig 3: what does it mean to \\\"fit more significantly\\\" to each experience? ...\\n\\nBy \\\"fit,\\\" we mean that applying the mask allows (i) the policy to achieve higher estimated action values and (ii) the Q-function to reduce the TD error, compared to applying the flipped mask. \\nRather than explaining this using Fig. 3, it would be clearer to explain it using the histogram of self-influence values. We will add this explanation and a figure of the histogram. \\n\\n\\n> A6. What is the meaning of \\\"ignore the influence of individual experiences on the sampling process\\\"? Isn't that the main contribution of PIToD?\\n\\nPIToD assumes that experience sampling from the replay buffer does not depend on experiences (e.g., uniform sampling). Under this assumption, PIToD efficiently tracks the influence of experiences during policy iteration. \\nTherefore, when sampling is influenced by experiences (e.g., as in PER), it is necessary to either conduct additional discussions to address this or disregard influences on the experience sampling part.\\n\\n\\n\\nWe sincerely appreciate your detailed comments and your dedicated engagement during the author rebuttal period.\"}", "{\"summary\": \"The paper describes a novel method called Policy Iteration with Turn-over Dropout (PIToD) for excluding experiences that negatively affect the performance of RL agents when used for training via policy iteration. This includes how to calculate the influence of a single experience on the agent's performance and how to amend the policy given this calculated influence. This is done efficiently through a parameter masking technique called turn-over dropout. The authors provide theoretical justification as to why this masking technique is similar to leaving out a specific experience. 
PIToD is tested on four well-known MuJoCo environments and shows improvement in performance for some of the environments while remaining computationally efficient.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method is simple, novel, and original.\", \"Comparison to the leave-one-out naive approach emphasizes the significance of PIToD.\", \"Diagrams are clear and self-explanatory.\", \"Results showcase the efficiency advantages of PIToD very well\", \"the paper is generally well written with a clear narrative and a comfortable flow.\"], \"weaknesses\": [\"the theoretical justification is lacking because the assumptions (1 and 2) don't seem realistic. Basically, the authors assume that the masks enforce some sort of leave-out rule. It makes sense since they try to minimize overlap (App B), but the assumptions are unjustified. The authors should provide empirical evidence supporting assumptions 1 and 2, and discuss the implications if these assumptions don't fully hold in practice.\", \"missing analysis of results, mostly why the results in figures 3, 4 and 6 look the way they do. The authors should explain why the plots for some environments show lower ratios / performance than others, the reasons for instability in some cases (including shaded confidence intervals). Figure 4 is explained but this explanation is unclear. Specifically, it is unclear why this figure suggests that the agent is overfitting to older experiences.\", \"figure 4 heatmap scales are all different, making it very difficult to read and compare the graphs. Either use consistent scales for all plots or at least keep colors consistent regardless of the scale, i.e., if 1 is yellow and -1 is blue in one scale, then in another scale -1 is still blue, and -4 can be a new color, e.g., green.\", \"does not consider what happens if the buffer is full, something that will eventually happen if training persists. 
The authors should provide a more intuitive explanation or a simple example that illustrates why the signs of these equations indicate correct influence calculations.\", \"no comparison to other methods. E.g., PER is definitely comparable.\", \"redundant mini-paragraphs (start of sections) and parentheses (introduction) that map out the content of the paper are distracting and ruin the flow of reading.\"], \"questions\": [\"Could I use PIToD and PER together?\", \"Your algorithm has many loops. Can these ToD iterations be efficiently batched?\", \"it is unclear why equations 8 and 9 indicate correct influence calculations if they are positive and negative, respectively. Why is this the case?\", \"why is the \\\"correct experience ratio\\\" of hopper in policy improvement so much worse than the others, and why does the ratio for walker and ant decrease throughout the epochs? The authors should discuss potential reasons for these differences and what implications they might have for the applicability of PIToD across different environments.\", \"why is bias mainly an issue with the humanoid environment and not the others? Is this consistent with previous findings in the literature? What specific characteristics does the humanoid environment have that might contribute to this bias issue?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes the PIToD method to efficiently estimate which experiences positively or negatively impact RL learning. The proposed approach sets a drop-out mask for each experience and estimates the influence of each experience based on this mask and its complement, allowing for significantly more efficient computation compared to traditional Leave-One-Out (LOO) methods. 
The authors demonstrate that their method accurately estimates the influence of experiences using various metrics and further show, through experiments, that applying it to SAC improves learning performance.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. A novel approach is presented for estimating the influence of specific experiences in RL by using Turn-over Dropout (ToD).\\n2. It is theoretically demonstrated that the complement mask $w\\\\_i$ for experience $e\\\\_i$ indicates an absence of influence from $e\\\\_i$.\", \"weaknesses\": \"1. The metric used to show self-influence appears inappropriate. As noted in L246, $Q\\\\_{w\\\\_i}$ is in a state where it has not been trained on $e\\\\_i$, so $L\\\\_{pe,i}(Q\\\\_{w\\\\_i}) \\u2212 L\\\\_{pe,i}(Q\\\\_{m\\\\_i})$ is expected to be greater than zero in most cases, regardless of whether $e\\\\_i$ is beneficial for learning. Additionally, since $\\\\pi\\\\_{m\\\\_i} = \\\\arg\\\\max\\\\_{\\\\pi} L\\\\_{pi,i}(\\\\pi)$, $L\\\\_{pi,i}(\\\\pi\\\\_{w\\\\_i}) \\u2212 L\\\\_{pi,i}(\\\\pi\\\\_{m\\\\_i})$ is likely always less than zero. Thus, these metrics do not directly indicate whether the experience has a positive or negative influence on RL learning.\\n2. The evaluation shown in Figure 6 also seems flawed. In Algorithm 4 of Appendix D, $w^*$ is already defined as $\\\\arg\\\\max\\\\_w L\\\\_{ret}(\\\\pi, w)$ , so it is unsurprising that high returns are achieved.\\n3. The main paper contains too few experimental results. While it seems that several experiments were conducted in the appendix, summarizing the purpose and outcomes of these experiments in the main paper would enhance clarity.\\n4. It would be beneficial to include a direct comparison of mean performance between the original SAC method and the SAC method with PIToD in the main paper.\", \"questions\": \"1. 
**[About Weakness 1]** You mentioned that the PI and PE metrics indicate whether a specific experience has a positive or negative effect. Could you clarify this further? The current metrics seem to only reflect whether or not learning utilized $e_i$.\\n2. **[About Weakness 2]** Is this approach fundamentally different from simply creating a policy ensemble through dropout and selecting the best-performing one? If the truly optimal $w^*$ is selected, does it necessarily mean that the experience has a negative impact?\\n3. **[About Weakness 3]** Could you summarize the experiments in the appendix and explain what each aims to demonstrate? It would be more beneficial if these results were integrated into the main paper.\\n4. **[About Weakness 4]** Could you also present the experiments mentioned above?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xR4M\", \"comment\": \"Thank you for your valuable comments.\\nWe will revise the paper based on your feedback. \\n\\n**Q1.** \\nThe paper does not have a theoretical underpinning for their approach though some readers may find the idea intuitive, that said, personally, I'm not a fan of removing samples from experience buffers; \\n\\n**A1.** \\nWe provide theoretical analyses in Appendix A to justify the use of PIToD.\\n\\n\\n**Q2.** \\nbecause the way I see it, deleting \\\"negatively influential experiences\\\" seems to be a lenient/hysteresis update commonly seen in optimistic approaches, it may be useful in some cases but should be used with caution since it may hinder learning and cause bias; a comparison with similar approaches is not included. I think instead of removing samples, the safer approach is to use gradient clipping or learning rate schedules to stabilize training. 
Techniques like gradient clipping cap the maximum value of gradients to prevent instability without losing any data --\\n\\n**A2.** \\nOur method (PIToD) is primarily designed to identify and remove experiences that hinder learning or introduce bias. \\n\\nAdditionally, the main objective of our work is not to stabilize training but to estimate the influence of experiences. \\nIn the context of online reinforcement learning, our study is the first attempt to estimate the influence of experiences and, to the best of our knowledge, no similar studies (approaches) exist. \\n\\n\\n\\n**Q3.** \\nData removal in RL may massively hinder exploration, especially in non-linear environments, speaking of which --\\n\\n**A3.** \\nIn our experiments, data removal is not performed during training (c.f. Algorithm 4). Therefore, in our experiments, data removal does not hinder exploration.\\n\\n\\n**Q4.** \\nthe paper only evaluates on classic mujoco, and lacks a variety of evaluation tasks, speaking of evaluation --\\n\\n**A4.** \\nIn our paper, we conduct experiments on 4 Mujoco tasks and 8 DMC tasks (Section 6 and Appendix G).\\nThe number of tasks would be comparable to previous works accepted at ICLR (e.g., [1]). \\n\\n[1] https://openreview.net/forum?id=PczQtTsTIX\\n\\n\\n**Q5.** \\nThe evaluation also lacks other sota methods for comparison, since the promise was to have better performance, rather than better theoretical understanding, which goes back to my first point. \\n\\n**A5.** \\nWe provide theoretical analyses (c.f. A1). \\nAdditionally, since our work is the first to address this type of study in online RL, there are no existing SOTA methods for comparison (c.f. A2). \\n\\n\\n**Q6.** \\nHow does sample rejection based on gradient relate to hessian? \\n\\n**A6.** \\nWe may not have fully understood your question. 
Are you asking whether our method (PIToD) relates to existing research (e.g., [2]), which uses Hessian-based approaches for influence estimation in a supervised learning setting?\\n\\n[2] https://arxiv.org/pdf/1703.04730\"}", "{\"title\": \"Response to Reviewer A595\", \"comment\": \"Thank you for your valuable comments.\\nWe will revise the paper based on your feedback. \\n\\n**Q1.** \\nthe theoretical justification is lacking because the assumptions (1 and 2) don't seem realistic. Basically, the authors assume that the masks enforce some sort of leave-out rule. It makes sense since they try to minimize overlap (App B), but it is the assumptions are unjustified. The authors should provide empirical evidence supporting assumptions 1 and 2, and discuss the implications if these assumptions don't fully hold in practice. \\n\\n**A1.** \\nFor Assumption 2, we ensure in our implementation that there is no overlap between the masks $m_i$ and the flip masks $w_i$. Therefore, we believe this assumption holds in practice. \\n\\nOn the other hand, Assumption 1 may not be fully satisfied in our current implementation. If Assumption 1 is not strictly met, it means that applying the flip mask corresponding to a specific experience may not entirely exclude the influence of that experience. To fully satisfy Assumption 1, one potential implementation would involve completely eliminating any overlap between the masks $m_i$ and $m_j$ ($i \\\\neq j $). However, this approach would require ignoring interactions between experience groups, which is impractical. For this reason, we did not adopt such a method in our implementation. \\nIn summary, there is a trade-off between strictly meeting Assumption 1 and maintaining practicality. Research and development on methods and implementations that balance these factors remain future work. \\nWe will add a discussion of this point in Appendix I. 
\\n\\n\\n\\n\\n**Q2.** \\nmissing analysis of results, mostly why the results in figures 3, 4 and 6 look the way they do. The authors should explain why the plots for some environments show lower ratios / performance than other, the reasons for instability in some cases (including shaded confidence intervals). Figure 4 is explained but this explanation is unclear. Specifically, it is unclear why this figure suggests that the agent is overfitting to older experiences.\\n\\n**A2.** \\nRegarding the explanation of Figure 4, the phrase \\\"overfitting to older experiences\\\" may have been an overstatement. We have revised the manuscript to clarify that self-influence is concentrated on older experiences, without implying overfitting. \\nFor the other points (Figures 3 and 6) you mentioned, we will include additional explanations and analyses in the revised manuscript. \\n\\n\\n**Q3.** \\nfigure 4 heatmap scales are all different, making it very difficult to read and compare the graphs. Either use consistent scales for all plots or at least keep colors consistent regardless of the scale, i.e., if 1 is yellow and -1 is blue inn one scale, then in another scale -1 is still blue, and -4 can be a new color, e.g., green. \\n\\n**A3.** \\nWe are currently revising the figure to address this issue. \\n\\n\\n**Q4.** \\ndoes not consider what happens if the buffer is full, something that will eventually happen if training persists. \\n\\n**A4.** \\nIf computational resources allow, we will conduct experiments to address this scenario. \\n\\n\\\\# In PIToD, even if the buffer becomes full and older experiences are overwritten by newer ones, this does not pose a fundamental problem. However, if we want to estimate the influence of the overwritten experiences later, it is necessary to record the corresponding masks (or the random seeds used to generate those masks). \\n\\n\\n\\n\\n**Q5.** \\nno comparison to other methods. E.g., PER is definitely comparable. 
\\n\\n**A5.** \\nWe are currently conducting experiments with SAC. Once those are complete, we will add experiments with SAC+PER as well. \\n\\n**Q6.** \\nCould I use PIToD and PER together? \\n\\n**A6.** \\nPER performs weighted sampling of experiences based on TD errors evaluated with the Q-function ([1], Algorithm 1, line 11). \\nIf it is acceptable to ignore the influence of individual experiences on the sampling process, then PIToD and PER can be used together. \\nIn such a case, combining the two methods would involve replacing PIToD's uniform sampling scheme with PER's prioritized sampling scheme. \\n\\n[1] https://arxiv.org/pdf/1511.05952\\n\\n\\n**Q7.** \\nYour algorithm as many loops. Can these ToD iterations be efficiently batched?\\n\\n**A7.** \\nFor line 8 of Algorithm 2, when using $L_{pe, i} $, $L_{pi, i}$, and $L_{bias, i}$, the operations are performed in batches. \\nThese batch operations are implemented in the provided code under `redq/utils/bias_utils.py` in the supplemental material. \\n$L_{ret}$ involves interaction with the Mujoco (CPU) environment for return calculation, so batching is not applied in this case.\"}", "{\"title\": \"Response to Reviewer A595\", \"comment\": \"**Q8.**\\nThe authors should provide a more intuitive explanation or a simple example that illustrates why the signs of these equations indicate correct influence calculations. \\n**Q8'.** \\nit is unclear why equations 8 and 9 indicate correct influence calculations if they are positive and negative, respectively. Why is this the case? \\n\\n**A8.** \\nThe explanation for the reason is provided in the second paragraph of Section 5.1: \\\"We evaluate whether PIToD has correctly estimated the influence of experiences by examining the signs (positive or negative) of the values of Eq. 8 and Eq. 9....\\\" \\n\\nThe policy and Q-function trained with the mask are optimized to maximize $L_{pi, i}$ and minimize $L_{pe, i}$, respectively (c.f. lines 5 and 6 in Algorithm 2). 
\\nIn contrast, the policy and Q-function trained with the flip mask are not optimized in this way. Therefore: \\n- The sign of Eq. 8 is positive because the mask reduces the TD error more significantly. \\n- The sign of Eq. 9 is negative because the mask results in higher action values. \\n\\n\\n**Q9.** \\nwhy is the \\\"correct experience ratio\\\" of hopper for in policy improvement so much worse than the others, and why does the ratio for walker and ant decrease throughout the epochs? The authors should discuss potential reasons for these differences and what implications they might have for the applicability of PIToD across different environments.\\n\\n**A9.** \\nFor policy improvement, it appears that environments with higher-dimensional action and state spaces tend to have higher ratios (Hopper < Walker == Ant < Humanoid). This suggests that higher-dimensional spaces might allow the policy function to better discriminate between experiences. \\n\\nAs for the decreasing ratio in some environments, this could be due to an increasing proportion of similar experiences in the replay buffer as epochs progress, or an increase in the variety of experiences relative to the network (mask) size, making it harder to differentiate individual experiences. \\n\\nWe have added this discussion to the main text. \\n\\n\\n\\n\\n**Q10.** \\nwhy is bias mainly an issue with the humanoid environment and not the others? Is this consistent with previous findings in the literature? What specific characteristics does the humanoid environment have that might contribute to this bias issue?\\n\\n**A10.** \\nIn the Humanoid environment, Q-functions are more prone to overestimation, making bias a more significant issue. \\nThis trend is consistent with findings in previous research (e.g., Figure 14 in [1]). \\nGenerally, tasks with higher-dimensional state and action spaces tend to exhibit this type of problem more prominently. 
\\n\\n[1] https://arxiv.org/pdf/2110.02034\"}", "{\"summary\": \"This paper aims to study the influence individual experience samples have in training RL agents. 
The authors used identifying masks to differentiate individual samples and observe the difference in the resulting Q values.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This paper focuses on experience replay, that is, sampling distribution manipulation, which I think is an under-represented direction in RL research.\", \"I like the fact that ToD is being applied in the sample consideration; the usage of ToD feels natural and justified for this use case.\"], \"weaknesses\": [\"The paper does not have a theoretical underpinning for their approach though some readers may find the idea intuitive, that said, personally, I'm not a fan of removing samples from experience buffers;\", \"because the way I see it, deleting \\\"negatively influential experiences\\\" seems to be a lenient/hysteresis update commonly seen in optimistic approaches, it may be useful in some cases but should be used with caution since it may hinder learning and cause bias; a comparison with similar approaches is not included. I think instead of removing samples, the safer approach is to use gradient clipping or learning rate schedules to stabilize training. Techniques like gradient clipping cap the maximum value of gradients to prevent instability without losing any data --\", \"Data removal in RL may massively hinder exploration, especially in non-linear environments, speaking of which --\", \"the paper only evaluates on classic mujoco, and lacks a variety of evaluation tasks, speaking of evaluation --\", \"The evaluation also lacks other sota methods for comparison, since the promise was to have better performance, rather than better theoretical understanding, which goes back to my first point.\"], \"questions\": \"How does sample rejection based on gradient relate to hessian?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"A1. 
the justification for assumption 2 seems valid. assumption 1 still seems unrealistic and a more in-depth analysis of the aforementioned tradeoff is required. furthermore, no discussion was added to the appendix.\\n\\nA2. The added explanations are unclear:\", \"fig_3\": \"what does it mean to \\\"fit more significantly\\\" to each experience? can you provide an explanation as to why this is happening, other than \\\"high dimensions\\\"?\", \"fig_4\": \"(a) It is still unclear why these phenomena happen and no explanation / hypothesis is provided. (b) if there is no pattern then what is the significance of these results?\", \"fig_6\": \"No additional explanation was found. was any new information on this figure added to the text? if so, where?\\n\\nA3. the figure still requires fixing\\n\\nA4. Though filling up the buffer does not pose a fundamental problem in running PIToD, it is still a concern that new, bad experiences can replace old, good experiences. can PIToD be used to help choose which experiences to discard? further analysis is required.\\n\\nA5. experiment results were not yet added to the paper\\n\\nA6. what is the meaning of \\\"ignore the influence of individual experiences on the sampling process\\\"? isn't that the main contribution for PIToD?\\n\\nA7. very interesting. thank you for your response\\n\\nA8. this explanation simply states how the network was optimized, but it is still unclear what \\\"correct\\\" means in this context and why this correctness corresponds to the sign of these equations.\\n\\nA9. What is the intuition behind this explanation? why does this suggest that \\\"higher-dimensional spaces might allow...\\\". The explanation about the decreasing ratio makes sense.\\n\\nA10. This is an important point to understand when reading this plot. 
please add this citation with a small explanation in the main text (preferably direct the reader to figure 14 directly)\"}", "{\"title\": \"Response to Reviewer YodU\", \"comment\": \"Thank you for your valuable comments.\\nWe will revise the paper based on your feedback. \\n\\n**Q1.** \\nMore explanation in Section 4 of the PIToD method would be very helpful for a reader's understanding. \\\"Thus, some readers may suspect that the parameters dropped out by (i.e., the parameters obtained by applying ) are not influenced by .\\\" - This sentence caused a lot of confusion to me when reading the paper. The phrasing seems to suggest that the parameters dropped out are indeed affected, but this doesn't seem to be the case (as the reader would suspect). \\n\\n**A1.** \\nWe intended to use \\\"suspect\\\" in the sense of \\\"think\\\" or \\\"believe,\\\" rather than \\\"doubt,\\\" which may have caused some confusion. \\nThe parameters dropped out by the mask are, in theory, not influenced by the corresponding experience. \\nWe have revised the sentence in Section 4 to ensure that this point is communicated clearly and unambiguously. Thank you for highlighting this potential misunderstanding. \\n\\n\\n**Q2.** \\nI am also struggling to understand Appendix Section A.... Looking at the equality, doesn't it mean that the gradient for $i'$ is 0 for everything but $i'$ due to the indicator? \\n\\n**A2.** \\n\\n> Assumption 1 seems extremely strong and appears to be almost exactly defining the property you are looking to prove. \\n\\nAssumption 1 may be a strong assumption that is difficult to reconcile with practical implementations. \\nAs mentioned in response A1 to Reviewer A595, strictly satisfying this assumption in practical implementations is not easy at this stage. i.e., Prioritizing practicality means that fully meeting this assumption remains challenging, and resolving this gap is a topic for future work. 
\\n\\nAssumption 1 essentially assumes that the Q-function and policy function using $w_i'$ can be replaced by functions dominantly influenced by the experience $e_i'$. \\nOn the other hand, what we aim to prove is that, when training under the PIToD framework, applying the flip mask $w_i'$ corresponding to $e_i$ enables us to isolate parameters unaffected by $e_i$. \\n\\n> Looking at the equality, doesn't it mean that the gradient for is 0 for everything but due to the indicator?\\n\\nYes, that is correct.\\n\\n\\n**Q3.** \\nSection C in the appendix contains a lot of interesting content. The findings from the Group Mask preliminary experiments in the appendix seem quite important - \\\"In our preliminary experiments, we found that the influence of a single experience on performance was negligibly small\\\". This should be mentioned at the very least in the main paper. \\\"Key implementation decisions to improve learning.\\\" is also very important. The implementation details, and the architectural experiments you conducted should not be relegated to the appendix in this manner. Especially since Figure 10 shows they are critical to your method working. \\n\\n**A3.** \\nAs per your suggestion, we plan to include several key insights and findings in the main paper.\\n\\n\\n**Q4.** \\nWhat is the importance/significance of the differences in the ratios between policy eval/improvement? For Eqn 8 I can understand that the td error is higher for an experience you haven't trained on, but for Eqn 9 I do not quite see why the action chosen by $\\\\pi_{\\\\theta, w_i}$ should have a lower estimated Q. \\n\\n**A4.** \\n$\\\\pi_{\\\\theta, m_i}$ is trained to maximize the estimated Q (c.f. Algorithm 2, line 6), whereas $\\\\pi_{\\\\theta, w_i}$ is not trained in the same way. \\nAs a result, $\\\\pi_{\\\\theta, w_i}$ is expected to have a lower estimated Q.\\n\\n\\n**Q5.** \\nSection 5.2 is very misleading since you're estimating the time LOO would take. 
Given that important sections have been pushed into the appendix, I would advise replacing these with other more relevant/concrete content. \\n\\n**A5.** \\nThank you for pointing this out. To address your concern, we are considering replacing this section with other content (e.g., content from Appendix C). \\nAdditionally, we will explicitly clarify in the text that the LOO time presented in this section is based on an estimated calculation. \\n\\n\\n\\n**Q6.** \\n\\\"In our setup, $L_{ret}$ is estimated using Monte Carlo returns collected by rolling out policies\\\" - How many rollouts are used for each estimation? Additionally, how do you utilise Eq. 10 to identify experiences? Are you rolling out the agent multiple times to collect MC returns? Is this factored into your training budget and reflected in Figure 6? \\n\\n**A6.** \\nFor each group of experiences included in the replay buffer, the agent is rolled out for 10 test episodes to estimate the MC return. Using these estimates, Eq. 10 is calculated to estimate the influence of each experience group. \\n\\nThe experience group corresponding to the flip mask with the highest Eq. 10 value is identified as having a negative influence. \\n\\nThe additional budget required for influence estimation is not reflected in Figure 6 but is shown in Figure 13.\"}", "{\"comment\": \"Thank you for your comments and your engagement during the author rebuttal period.\\nWe will do our best to incorporate your suggestions into the next revision.\"}
EWKPEtwjTy
A Discrete Actor and Critic for Reinforcement Learning on Continuous Tasks
[ "Jundong Zhang", "Tianqi Wei" ]
Solving continuous reinforcement learning (RL) tasks typically requires models with continuous action spaces, as discrete models face challenges such as the curse of dimensionality. Inspired by discrete controlling signals in control systems, such as pulse-width modulation, we investigated RL models with discrete action spaces with performance comparable to continuous models on continuous tasks. In this paper, we propose an RL model with a discrete action space, designed a discrete actor that outputs action distributions and twin discrete critics for value distribution estimation. We also developed both the training method and exploration strategy for this model. The model successfully solved BipedalWalkerHardcore-v3, a continuous robot control task in a complex environment, achieved a higher score than the state-of-the-art baselines and comparable results across various other control tasks.
[ "reinforcement learning", "discrete action space", "continuous control", "bipedal locomotion" ]
Reject
https://openreview.net/pdf?id=EWKPEtwjTy
https://openreview.net/forum?id=EWKPEtwjTy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sRvMjfVCBV", "ZBSdNXzpPb", "YlA16KQU1x", "RKkxz11KaW", "KinmoOjfvt", "48efaiU92K" ], "note_type": [ "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1734755134984, 1730737192791, 1737524066877, 1730044671575, 1730279851991, 1730568540268 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10630/Area_Chair_oPVG" ], [ "ICLR.cc/2025/Conference/Submission10630/Reviewer_TGuu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10630/Reviewer_PyGc" ], [ "ICLR.cc/2025/Conference/Submission10630/Reviewer_brmt" ], [ "ICLR.cc/2025/Conference/Submission10630/Reviewer_rmKX" ] ], "structured_content_str": [ "{\"metareview\": \"The paper aims at addressing continuous control through discretization of actions. The motivation is that in PWM-based control systems, discrete changes in voltage produce continuous control of current. Based on TD3, the authors use distributional RL to learn the critic.\\n\\nWhile I understand the benefits of discretizing actions, for example, for maintaining multimodal distributions, the paper has to go a long way to clearly describe the benefits of this approach. One good way is to pose a research problem that can be motivated from first principles. For example, the authors could start by discussing the drawbacks of maintaining a continuous distribution and the advantages of maintaining discrete actions for learning policies. Positioning the paper in terms of a research problem helps both the authors and the readers to understand what the closest baseline algorithms would be. For example, is the inability of representing multimodal distributions through normal policies the main issue we are addressing in this paper? Then a class of other approaches becomes relevant, such as normalizing flows, diffusion processes, and mixtures of Gaussians, as mentioned by a reviewer. 
I strongly suggest rephrasing the work in terms of a solid research question for the next submission.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers also found the work to be undermotivated, lacking a central research question. Moreover, there are other issues as well, such as the chosen environments being too simple or parts of the proposed algorithm not being motivated. The authors did not respond during the rebuttal phase.\"}", "{\"summary\": \"The paper proposes a reinforcement learning algorithm that discretizes continuous action spaces and combines TD3 with C51. The authors demonstrate that their model can achieve state-of-the-art performance on the BipedalWalkerHardcore-v3 task and competitive results on various MuJoCo tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"the trap and cheese problem is a nice illustration of the limitation of Gaussian policies\"], \"weaknesses\": [\"The paper needs a major revision; it is full of typos\", \"L36/L37 \\\"However, by varying the ratio of the time discrete values are presented\\\"\", \"L48 \\\"With idea, we proposed a model with discrete action\\\"\", \"\\\"In contrast, our discrete can learn the correct strategy\\\"\", \"Even though I understood the general idea of the paper, it is very hard to read and needs to be deciphered.\", \"\\\"For continuous tasks, such as motion control tasks, evaluating all possible action is not possible, hence RL models with discrete action space can suffer from the curse of dimensionality\\\". The start of the sentence talks about continuous action space and the conclusion is about the curse of dimensionality of discrete action space methods...\", \"\\\"We investigated RL models with discrete action spaces with performance comparable to continuous models on continuous tasks\\\"\", \"The paper cites relevant baselines but does not compare with them nor explain why. 
It should be compared to baselines that also go beyond simple Gaussian policies like [1].\", \"The combination of C51 with TD3 is quite an incremental contribution.\", \"[1] Tang, Y., & Agrawal, S. (2020). Discretizing Continuous Action Space for On-Policy Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5981-5988. https://doi.org/10.1609/aaai.v34i04.6059\"], \"questions\": \"1. How multimodal are the trained policies?\\n\\n2. Are you sure that equation (7) is better than simply maximizing the expected Q value? \\n\\n3. Why does Fig. 7 contain only the results of the proposed method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a discrete actor-critic method that discretizes continuous action spaces via a decoupled actor and leverages distributional double Q-learning for stable training. The authors motivate the approach via pulse-width-modulation and apply their approach with a fine-grained action distribution of 51 bins per dimension on several benchmark tasks against a selection of continuous control baselines. 
The approach performs favorably, indicating that discretized control may be a promising alternative for domains traditionally solved with continuous policies, e.g., robotics.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper investigates an interesting problem with real-world implications for exploration and learning in continuous robot control tasks.\", \"The procedure for clipped distributional double Q-learning and its impact on performance in Figure 6 is an interesting insight.\", \"Comparison to strong baseline agents on a variety of tasks is helpful to put the proposed method into perspective\"], \"weaknesses\": [\"The paper investigates the interesting topic of discretizing continuous control tasks to facilitate efficient learning control, but overlooks a line of key related work. Decoupled distributional critics for discretized control were shown to be competitive with continuous actor-critic approaches in [1], with prior motivation for PWM-type bang-bang control in [2] and decoupled Q-learning approaches for single agent control in [3-4], with additional application to fine-grained discretization as well as hardware control [5-6].\", \"The method presented here uses multiple concepts also presented/employed in references [1-6] (and references therein). Without a rigorous discussion, it is difficult to judge which parts of the presented method may qualify as a novel contribution.\", \"Figure 1 appears to be a slight modification without attribution of the PWM figure from Wikipedia (https://en.wikipedia.org/wiki/Pulse-width_modulation#/media/File:PWM,_3-level.svg) - could you clarify how this figure was created?\", \"Table 3 indicates a 51-bin discretization per action dimension. 
The transition from the PWM motivation (=2/3 bins) to this fine-grained discretization could be smoother.\", \"Additional proofreading would be beneficial (e.g., line 316: \\u201c\\u2026 is same to in Equ equation 8 ,\\u201d ; line 390: \\u201cFigure ??\\u201d)\", \"The \\u201cTrap or Cheese\\u201d experiment would profit from additional quantitative as well as qualitative analysis. It is not immediately clear why SAC would average options and not commit to one of the modes in an RL setting in practice (e.g., always select left / always select right).\", \"The SAC / TD3 / TQC baselines from Table 2 should be added to Figure 7. Are results in Table 2 compared at the same #frames for each algorithm? For example, TQC runs the Ant - Walker tasks for 5/5/3/10/5 x 1e6 max frames in their Figure 5, while Figure 7 here runs experiments for more than 1e7 frames.\", \"Kuznetsov et al. use OpenAI Gym-v3 environments, vs the experiments in Figure 7 use Gym-v5. It should be justified why these versions are 1-to-1 comparable (if they are).\", \"**References:**\", \"[1] T. Seyde, P. Werner, W. Schwarting, I. Gilitschenski, M. Riedmiller, D. Rus, and M. Wulfmeier. \\\"Solving continuous control via q-learning.\\\"\\u00a0ICLR, 2023.\", \"[2] T. Seyde, I. Gilitschenski, W. Schwarting, B. Stellato, M. Riedmiller, M. Wulfmeier, and D. Rus. \\\"Is bang-bang control all you need? solving continuous control with bernoulli policies.\\\"\\u00a0NeurIPS, 2021.\", \"[3] A. Tavakoli, F. Pardo, and P. Kormushev. \\\"Action branching architectures for deep reinforcement learning.\\\" AAAI, 2018.\", \"[4] A. Tavakoli, M. Fatemi, and P. Kormushev. \\\"Learning to represent action values as a hypergraph on the action vertices.\\\"\\u00a0ICLR, 2021.\", \"[5] D. Ireland, and G. Montana. \\\"Revalued: Regularised ensemble value-decomposition for factorisable markov decision processes.\\\"\\u00a0ICLR, 2024.\", \"[6] Y. Seo, J. Uru\\u00e7, and S. James. 
\\\"Continuous control with coarse-to-fine reinforcement learning.\\\"\\u00a0CoRL, 2024.\"], \"questions\": [\"See also weaknesses above for implicit questions\", \"How many seeds were the experiments in Figures 6 and 7 averaged over?\", \"Is the impact of removing the double critic not a bit concerning w/r/t training stability? A comparison to the impact of single vs double critic on other baselines would be insightful.\", \"What is the motivation for randomly selecting actions that have not been selected for the longest time during half of the episodes? Did this improve performance over only using the entropy exploration method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a discrete actor for continuous control tasks.\\nThey show competitive performance on a range of standard benchmark environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall the method looks promising.\\nThere is a good motivation for using discrete representations, especially around improved exploration in large continuous spaces.\", \"weaknesses\": \"Experiments focus on simple environments. The major challenges occur with large action spaces, where the differences in run-time, sample efficiency, etc. become more noticeable.\\nFor example, humanoids such as in Adversarial Motion Priors or Perpetual Humanoid Controller have up to 70 degrees of freedom.\", \"questions\": \"1. How would this method compare with very large action spaces?\\n\\n2. It seems there's no structure in the action space. One strong benefit of continuous representations is that there is a notion of closeness and order. Could this be applied here? 
Why not do so?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a technique to discretize continuous high-dimensional action spaces.\\nTo overcome the curse of dimensionality, the authors choose a factorized policy representation, where each action dimension is discretized, and use distributional RL to learn the critic and improve the policy.\\n\\nDiscretized policies can capture multimodal distributions (in contrast to more classic Gaussian policies), providing a more effective exploration.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Whether actions should be represented with discrete or continuous variables is very compelling. Each of the two representations has its own advantages and disadvantages. Discrete actions, in contrast with continuous ones, for example, naturally allow for multimodal exploration. On the other hand, continuous actions more easily allow for generalization, and they do not particularly suffer from the curse of dimensionality (which is why they have been so largely employed in RL).\\n\\nOriginality\\n--------------\\n\\nThe algorithm proposed is novel. \\n\\nQuality and Clarity\\n-------------------------\\n\\nThe method is described clearly.\\n\\nSignificance\\n---------------\\n\\nThe paper has the potential to address a very interesting question about whether (or when) discretizing actions is convenient.\", \"weaknesses\": \"Main Weaknesses\\n------------------------\\n\\n\", \"this_paper_lacks_a_central_research_question\": \"this is a real issue because I cannot understand what the authors are trying to achieve with their algorithm. 
I think this could be solved by clearly stating what is the problem that the authors want to solve (or mitigate) or what is the main research question.\\n\\nFrom the abstract, it seems that the authors want to develop a novel action discretization mechanism, and they mention that they draw inspiration from Pulse-Width-Modulation (PWM) from classic control. But in the method, nothing resembles PWM (unless I've missed the connection).\", \"discretization_does_not_play_a_central_role_in_the_paper_either\": \"the discretization is provided as hyperparameters, and what the authors propose is how to represent the new discrete action space (in particular, with a rather classic soft-max parametrization).\\n\\nSo, I am left unsure of what the real contribution of the paper is. \\n\\nIn the paper's conclusion, the authors highlight that their method provides a multimodal exploration, but my question is: do we need to discretize the action space to achieve it? Normalizing flows, diffusion processes, and mixtures of Gaussians (as the output of the policy network) achieve that as well... Discrete distributions are, by definition, multimodal (one mode per discrete action)... what is special about it?\", \"i_am_also_confused_by_the_algorithm_proposed_by_the_authors\": \"it is unclear why they chose distributional RL versus classic RL... I have nothing against distributional RL (quite the opposite), but I don't understand if this choice has a particular reason. It is the same about double Q-learning and clipping values (which are techniques to prevent the overestimation of the Q-function)... What is their role in this paper? The same goes for the entropic bonus defined in Eq. 11.\\n\\nIn summary, the proposed method seems like a \\\"collection\\\" of methods put together but without a clear objective. Perhaps the paper will become clearer when the authors better define their goal or research question. 
\\n\\nExperiments\\n-----------------\\n\\n**Trap-Or-Cheese Problem**: \\\"Models averaging good actions can result in a bad action.\\\"... I would say this is a true statement, but that is not due to continuous policies but rather to how the distribution is defined. Gaussian policies struggle in the presented situation because they have only one mode. That is not to say we need to discretize the action space to make the policy multimodal; other methods can be used. Furthermore, comparing SAC with the proposed algorithm is not meaningful, as the proposed algorithm uses distributional RL while SAC does not. (P.S., DQN, PPO, etc., can handle discrete action spaces too, thus, multimodality). **I can't see in the paper the results of this experiment**\\n\\n**Bipedal Walker** How many discretized actions were used? \\n\\n**Mujoco** What are the confidence intervals of Table 2? What do I see in figure 7?? What is the difference between gray and black lines?\\n\\n**In general, the algorithms' hyperparameters (neural network size, discretization, learning rates...) are completely missing.**\\n\\nJustification for the grades\\n-----------------------------------\\n\\n**Soundness: Fair**. It is really hard to judge if the paper is sound because the objective of the paper is unclear.\\n**Presentation: Fair**. Even though the paper is generally well written in terms of English/grammar, and the method is explained somewhat clearly, the presentation is not generally good because of the lack of a main research question, making it difficult for the reviewer to judge the method, and to make a \\\"take-home message\\\".... 
What did I learn reading this paper?\\n**Contribution: Poor.** As of now, I do not see a novel contribution in this paper: action discretization has already been explored in the community with better methods (in this paper, the action discretization comes as a hyperparameter); I do not see any relevant novelty in the use of distributional reinforcement learning and in the modified entropic objective (entropic bonuses are widely used in standard RL, see SAC). \\n\\n\\nMinor Typos / Unclear Points (Not relevant for the scores of this review)\\n----------------------------------------------------------------------------------------------\\n\\n* Line 316, \\\"Equ equation\\\"\\n* Line 390, \\\"Figure ??\\\"\\n* Figure 6: are the shaded areas STDs or confidence intervals?\\n* Line 198: It does not make sense to say that the elements of a set are independent. Dependency has a statistical meaning. For example, one can define a policy where the different dimensions of an action are sampled independently (as it would happen with a Gaussian with diagonal covariance), or they could be dependent (as it would happen with a Gaussian with non-diagonal covariance).\", \"questions\": \"1. What is the main research question of the paper? What do you try to achieve?\\n2. What do you do to achieve your research goal? How do the outlined components of your algorithm contribute to solving the research question?\\n3. Why do you compare with SAC, TD3, and TQC? They all use continuous actions. Shouldn't you compare with discrete RL as well? i.e., PPO with discrete policy or DQN? And what about algorithms that discretize the action space autonomously?\\n4. Can you provide more details on how SAC and your algorithm are compared in the Cheese-Trap environment (and the results)?\\n5. Can you clarify if the shaded areas in Figure 6 are STDs or confidence intervals?\\n6. What are the confidence intervals of Table 2?\\n7. Can you explain figure 7? \\n8. 
I am unsure whether *action discretization* or *multimodality* is the focus of your work. And why is *distributional RL* relevant?\\n9. Please include a table of hyperparameters in the appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
EW6bNEqalF
Offline RL in Regular Decision Processes: Sample Efficiency via Language Metrics
[ "Ahana Deb", "Roberto Cipollone", "Anders Jonsson", "Alessandro Ronca", "Mohammad Sadegh Talebi" ]
This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments called Regular Decision Processes (RDPs). In RDPs, the unknown dependency of future observations and rewards from the past interactions can be captured by some hidden finite-state automaton. For this reason, many RDP algorithms first reconstruct this unknown dependency using automata learning techniques. In this paper, we consider episodic RDPs and show that it is possible to overcome the limitations of existing offline RL algorithms for RDPs via the introduction of two original techniques: a novel metric grounded in formal language theory and an approach based on Count-Min-Sketch (CMS). Owing to the novel language metric, our algorithm is proven to be more sample efficient than existing results, and in some problem instances admitting low complexity languages, the gain is showcased to be exponential in the episode length. The CMS-based approach removes the need for naïve counting and alleviates the memory requirements for long planning horizons. We derive Probably Approximately Correct (PAC) sample complexity bounds associated to each of these techniques, and validate the approach experimentally.
[ "Reinforcement Learning", "Non-Markov Decision Process", "Offline Reinforcement Learning", "Regular Decision Processes", "Sample Complexity", "Automata" ]
Accept (Poster)
https://openreview.net/pdf?id=EW6bNEqalF
https://openreview.net/forum?id=EW6bNEqalF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLUqCQqoT1", "yLSM7R0NOc", "tHZwKUpxCH", "iyhcPV1Pxp", "VJusLWSBpR", "U3GSDivVki", "MIMZy8oFfc", "IlsEKYDYaG", "9T667E6uNv", "6xAJin7ReH", "5XXJgJAez1", "3DHQj9aj3L", "0lJTZKomdn", "0LjLcEizR4" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1733140362310, 1732790991134, 1730721714318, 1732188528359, 1730756959224, 1733128965668, 1732187989684, 1732188894163, 1734931605552, 1737524095655, 1732918875534, 1730584828611, 1732189015123, 1729783855999 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_K6iR" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_YHdK" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_92xy" ], [ "ICLR.cc/2025/Conference/Submission10975/Authors" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_K6iR" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_92xy" ], [ "ICLR.cc/2025/Conference/Submission10975/Authors" ], [ "ICLR.cc/2025/Conference/Submission10975/Authors" ], [ "ICLR.cc/2025/Conference/Submission10975/Area_Chair_qtfR" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10975/Authors" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_9Ta3" ], [ "ICLR.cc/2025/Conference/Submission10975/Authors" ], [ "ICLR.cc/2025/Conference/Submission10975/Reviewer_YHdK" ] ], "structured_content_str": [ "{\"comment\": \"Thank you to the authors for their detailed response. It has helped me understand the work better and increased my confidence in my score.\\n\\nAs a side remark, what the authors mentioned for 'Continuous observations or actions\\\" can also be applied to learn a labeling function for reward machines. 
Such a discretisation of the observation space makes more sense for learning/representing the reward function as a reward machine (since the reward function is generally sparse without reward engineering). It makes less sense for learning/representing the whole MDP as an RDP, since that will have important implications on the size of the RDP and whether or not the transition dynamics are Markov.\"}", "{\"comment\": \"Thank you for your reply. I am not sure I made myself fully clear about the weakness I raised, since I don't understand how your reply addresses it. What I think is missing from an otherwise good paper that I recommend acceptance for is a clear description of how the RDP setting extends the MDP setting with respect to everyday(?) RL-suitable tasks. Could you provide an idea about the applicability of the presented techniques by providing 1 or 2 examples of real-world domains or applications where RDPs are particularly useful or necessary over the standard MDP setting?\"}", "{\"summary\": \"This paper explores offline reinforcement learning (RL) within Regular Decision Processes (RDPs), a subclass of non-Markovian environments. The authors propose improvements on existing methods through a novel language metric and the use of Count-Min Sketch (CMS) to enhance sample efficiency and reduce memory costs. They provide theoretical guarantees and some empirical results in specific domains, comparing their approach to the state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper offers a theoretical framework for a new language metric and CMS integration in offline RL for RDPs. The analysis of PAC sample complexity for RDPs provides valuable insight into distinguishing state representations in non-Markovian settings.\", \"weaknesses\": \"1. 
The primary contribution, a novel language metric, builds heavily on established language hierarchy concepts and does not provide a groundbreaking shift in methodology. CMS is also a well-established technique, and its application here does not significantly differentiate the approach from prior state-merging algorithms. I think the authors should highlight the difference and the technical contribution.\\n2. Experimental results are limited to a small set of benchmark domains that are relatively simple, and the benchmark algorithms used for comparison are limited. More well-known RDP algorithms might be included, like Omega-RDP (Hahn et al., 2023) and Grid-world (Lenaers and Otterlo, 2021).\\n\\n---\\nHahn, E. M., Perez, M., Schewe, S., Somenzi, F., Trivedi, A., & Wojtczak, D. (2024, March). Omega-Regular Decision Processes. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 19, pp. 21125-21133).\\n\\nLenaers, N., & van Otterlo, M. (2022). Regular decision processes for grid worlds. In Artificial Intelligence and Machine Learning: 33rd Benelux Conference on Artificial Intelligence, BNAIC/Benelearn 2021, Esch-sur-Alzette, Luxembourg, November 10\\u201312, 2021, Revised Selected Papers 33 (pp. 218-238). Springer International Publishing.\", \"questions\": \"1. Given the reliance on existing language hierarchy theories, how does the proposed language metric meaningfully differ in terms of its theoretical impact on distinguishing states within RDPs?\\n2. Can the authors expand on any potential limitations of CMS in larger, more complex RDPs, especially concerning long planning horizons?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the detailed review and valuable feedback, which we will integrate into the paper.\\nWe discuss several points meant to address weaknesses and questions in the review. 
In particular, the first question is addressed mainly by the point \\u201cNovelty of the language metric\\u201d. The second question is addressed mainly by the point \\u201cCount-Min-Sketch\\u201d.\\n\\n**Novelty of the language metric** : Contrary to the weakness perceived by the reviewer, we claim that the proposed language hierarchy does not rely on existing language hierarchies. The only connection with existing classes of languages is the operator in Definition 1, which is only loosely inspired by classes of languages in the first level of the dot-depth hierarchy. Thus, our contribution with regard to the language metric and the language hierarchies is entirely novel. Furthermore, we believe that using a language-theoretic approach to prove sample complexity guarantees in a reinforcement learning setting is extremely innovative. \\n\\n**Count-Min-Sketch** : Though CMS is indeed used in other automaton learning algorithms (e.g. Baumgartner & Verwer[1]), other authors do not provide an extensive statistical analysis, and hence the choice of CMS parameters appears arbitrary. In contrast, our analysis sheds light on how to properly choose the CMS parameters, and as far as we know, we are the first to prove sample complexity guarantees for CMS in the automaton learning setting. As discussed in the answer to Reviewer K6iR, the difficulty of learning complex RDPs is not due to CMS, but rather due to properties of the RDP itself.\\n\\n**Omega-regular decision processes (ODPs)** : As far as we can tell, ODPs are a generalization of RDPs, and hence ODPs should be at least as hard to learn as RDPs. The paper on ODPs presents several *classical* complexity results (e.g. 
EXPTIME-hardness), while one of our major contributions is to provide a *statistical* analysis of the RDP learning algorithm and its sample complexity.\\n\\n**Limitation of benchmarks** : Since this is a primarily theoretical RL paper, the key goal is to devise provably sample-efficient algorithms that admit meaningful sample complexity bounds. The motivation behind providing numerical experiments here is: (1) to provide further insights into the presented results, thereby enriching discussion; and (2) to showcase that the presented approaches are not merely high-level theories and can indeed be implemented. We believe the domains used in our experiments, albeit rather small, prove quite useful in achieving (1) and (2). Simple domains appear especially effective to spell out some key quantities (e.g., distinguishability) that arise in the context of RDP. We believe conducting experiments on larger domains falls into the scope of a follow-up work, where one may slightly depart from the theoretical framework via applying some techniques that are yet justifiable from a practical standpoint or may use some domain knowledge. Finally, we stress that releasing the code \\u2013 which we plan to do \\u2013 is a plus and paves the way to examining the presented approaches on larger domains. \\n\\n**References** : We plan to incorporate the references mentioned by the reviewer in the final version of the paper.\\n\\n[1] Baumgartner, Robert, and Sicco Verwer. \\\"Learning state machines from data streams: A generic strategy and an improved heuristic.\\\" International Conference on Grammatical Inference. PMLR, 2023.\"}", "{\"summary\": \"This paper considers the offline reinforcement learning (RL) problem when the offline trajectory data are non-Markovian---precisely when they are generated by an underlying Regular Decision Process (RDP). 
To address this problem, they propose two novel algorithmic adaptations of ADACT-H (Cipollone et al., 2023) from prior work to learn the underlying RDP: one using the Count-Min Sketch (CMS) and another using a novel language metric ($L_X$) for efficient learning of episodic RDPs from offline data. The language metric is shown to be a generalization of previous ones like $L_1$ and $L_{\\\\infty}$, capturing desired properties of both to improve state distinguishability. They show that the proposed approaches significantly reduce sample and space complexity compared to prior methods. Finally, they conduct experiments across five domains from prior works to demonstrate the proposed methods' advantages in terms of runtime, number of states, and reward maximization.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed language metric is novel and offers a unique approach to state distinguishability, unifying and extending traditional distance metrics (like $L_1$ and $L_{\\\\infty}$). This potentially sets a new standard for offline RDP learning.\", \"This is primarily a theoretical paper. The proposed approach has clearly stated assumptions and is supported by rigorous sample complexity analysis with proofs. Interestingly, through their theoretical analysis, the authors also uncover a mistake in one of the theoretical results from the ADACT-H (Cipollone et al., 2023) paper. Such results are very much appreciated. Although the experiments are in fairly simple small environments with a single baseline, their inclusion is also appreciated as it helps better understand the applicability of the proposed approach.\", \"The paper is generally well-written, with somewhat clear explanations of the RDP framework and experimental setup. However, some sections (e.g., language metric details) could benefit from further simplification for broader accessibility. 
They also provide pseudocodes in the appendix for better clarity.\", \"weaknesses\": [\"The method seems to only be applicable to very small environments in practice, given that it attempts to learn the underlying RDP for both observation and reward transitions. This doesn't look like it can be applied to larger environments (such as high-dimensional observations and continuous control ones), unlike prior works in the online setting like Toro Icarte et al., 2019.\", \"The approach assumes $\\\\pi^b$ maintains a non-zero minimum distinguishability, which may limit the algorithm's application in environments with varying policy behaviors or unobservable state spaces. Further discussion on generalizing this assumption would enhance the paper's robustness.\", \"The CMS-based approach suffers exponential complexity with the horizon \\\\(H\\\\), which could impact performance in long-horizon tasks, as evidenced in the Mini-hall domain experiment. Addressing this limitation or providing recommendations on CMS applicability range would be beneficial.\", \"The sample complexity bound relies on a parameter inversely related to the minimum occupancy of the optimal policy $d_m^*$, which could become infeasible in scenarios with low-reachability states. Including strategies or adjustments for low-occupancy environments could further strengthen the results.\", \"In general, the paper is very hard to follow for an RL researcher (at least it was for me). This is partially because of too many notations, and a mismatch of notations from different fields. For example, some come from the RL literature (like the value function $V(.)$) and others like $Q(.)$ which usually represents the action-value function in RL now represents the set of RDP states (instead of $S$ or even $U$). 
I think a lot of work needs to be done to simplify the notations in the background and improve the clarity of Section 4.\"], \"questions\": [\"Could the authors discuss possible modifications to their approach if $\\\\mu_0$ is close to zero in some states? What would the practical implications be in such cases?\", \"Can the proposed approach be applied to settings where the RDP structure is much larger, high dimensional, and even continuous? E.g. [1].\", \"[1] Allen, Cameron, et al. \\\"Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy.\\\" arXiv preprint arXiv:2407.07333 (2024).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the detailed reply, which solves most of my concerns. I raise the score to 6.\"}", "{\"comment\": \"We thank the reviewer for the detailed review and valuable feedback, which we will integrate into the paper. We discuss several points meant to address weaknesses and questions in the review. In particular, the first question is addressed mainly by the point \\u201cAssumption on distinguishability\\u201d and \\u201cComplexity of RDPs\\u201d. The second question is addressed mainly by the point \\u201cContinuous observations and actions\\u201d.\\n\\n**Comparison to the reward machine literature** : We remark that unlike Toro Icarte et al., we do not assume access to an existing label set and label function, and instead learn RDPs directly from a dataset of action-observation-reward episodes. We believe this problem to be significantly harder, and at the same time more realistic in case a label function is hard to define manually. 
Hence the two approaches are not directly comparable, and the RDPs we obtain are generally larger since we cannot exploit the compression afforded by the labels.\\n\\n**Scalability** : An interesting direction for future research would be to assume that the underlying MDP state is the cross-product of the observation and the automaton state, i.e. $S = \\\\mathcal{O} \\\\times \\\\mathcal{U}$ in our new notation. This is generally done in the reward machine literature, and makes the automaton part to be learned considerably more compact. A significant challenge is to achieve such an extension while still maintaining formal sample complexity guarantees.\\n\\n**Assumption on distinguishability** : Cipollone et al. prove a lower bound on the sample complexity which is inversely proportional to the distinguishability of the $L_1$-norm. Hence the difficulty of the RDP learning problem increases as the distinguishability approaches 0, and we do not believe that we can efficiently learn the exact RDP in this case, or easily generalize Assumption 1 (but perhaps efficient approximation algorithms are possible).\\n\\n**Minimum occupancy** : The approximation algorithm AdaCT-H-A of Cipollone et al. (which appears in Appendix A) partially alleviates the dependence on the minimum occupancy $d_m^*$. In fact, Cipollone et al. prove a sample complexity bound which excludes $d_m^*$ but in our corrected proof it appears as an additive term. In our view, the most promising direction for removing the dependence on $d_m^*$ (and on the concentrability $C^*$) is to develop sample-efficient *online* algorithms, which is something that we plan to investigate in future work.\\n\\n**Complexity of RDPs** : Several problem-specific parameters that appear in the sample complexity bound, such as $C^*$, $d_m^*$ and $\\\\mu_0$, strongly depend on the complexity of the RDP. 
Concretely, if the RDP is reasonably small and has no low-probability branches, then we can control the magnitude of $C^*$ and $d_m^*$. As we show in the paper, we can also often control the magnitude of $\\\\mu_0$ using our novel language metric. Hence the novel language metric should allow us to efficiently learn such well-behaved RDPs, even if the action and observation spaces are large. Since the lower bound of Cipollone et al. includes both $C^*$ and $\\\\mu_0$, learning exact RDPs becomes significantly harder as the complexity of the RDP increases, no matter which algorithm is used.\\n\\n**Continuous observations or actions** : Since we do not assume access to an existing label set, the only possibility we foresee is to learn a discrete latent representation (\\\"labels\\\") from experience, perhaps using unsupervised learning or another optimization objective, and use such latent variables as transition labels in the learned RDP.\\n\\n**Count-Min-Sketch** : Note that the exponential complexity in the horizon H is due to the distinguishability of the $L_\\\\infty$-norm, which is problem-dependent. Hence the exponential complexity is not due to CMS (which always improves the memory complexity), but instead due to the problem being solved and the metric used. This is precisely the limitation addressed by the language metric, which can unfortunately not be combined with CMS while maintaining strong sample complexity guarantees.\\n\\n**Notation** : We have changed our notation for the set of RDP states from $\\\\mathcal{Q}$ to $\\\\mathcal{U}$ according to the suggestions of the reviewer.\"}", "{\"comment\": \"We thank the reviewer for the detailed review and valuable feedback, which we will integrate into the paper. We discuss several points meant to address weaknesses and questions in the review. 
In particular, the question is addressed by points \\u201cEpisode traces\\u201d and \\u201cAdaCT-H\\u201d.\\n\\n**Episode traces** : We use the terms \\\"episode\\\", \\\"trace\\\" and \\\"suffix\\\" interchangeably in the paper, and all refer to sequences of action-reward-observation triplets introduced at the beginning of Section 2.3. We have clarified the use of terminology in the paper.\\n\\n**Outline of Theorem 1** : In the T-maze introduced in Example 3, while traversing the corridor each observation is random ($101$ or $111$). Hence in a corridor of length $H-1$, there are $2^{H-1}$ possible observation sequences, all equally likely. At the end of the corridor, the random reward depends on the first observation ($110$ or $011$). Hence the $L_\\\\infty$ norm has to compare the difference in probability of traces of length $H$, each of which has probability $1/2^H$. The distinguishability is equal to the maximum difference in probability, which can be at most $1/2^H$ in case the distribution on final rewards is different. On the other hand, the language metric only compares whether or not a certain final reward occurs, and each reward has probability $1/2$, so the distinguishability is $1/2$ in this case.\\n\\n**AdaCT-H** : The algorithm is first introduced in Section 3, but we have added a reference to Appendix A when AdaCT-H is again mentioned in Section 4.2.\"}", "{\"metareview\": \"This work studies offline RL in a special class of non-Markovian environments --- RDPs, where hidden finite-state automata capture dependencies between past interactions and future outcomes. It introduces two novel techniques: a formal language theory-based metric, which improves sample efficiency, and a Count-Min-Sketch approach that reduces memory requirements for long planning horizons. The authors also provide PAC sample complexity bounds and experimental validation to support their approach. 
The reviewers unanimously agree that this paper makes an interesting and valuable contribution to the field. As such, they all recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised questions regarding the novelty of the work and certain technical details, which were thoroughly addressed in the rebuttal. With all reviewers providing positive scores, I recommend acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for clarifying the question. Below we provide an additional response.\\n\\nAt a high level, RDPs are useful in all domains in which in order to act optimally the agent must rely on its history of interaction with the environment. This is in contrast with MDPs, which assume that the future is conditionally independent from the past given the current state.\\nMany examples can be found in the literature. Gabaldon [1] presents an environment where a robot is working in a biological research facility with various zones having different safety-levels.The robot is considered to be contaminated if it has touched an hazardous material, and the effect of the action *touch* on a new material will depend on its history of interaction in the environment (whether or not the robot has visited a disinfection station in-between). Or the robot cannot *open* the entrance of a particular lab, if the temperature of that lab exceeded a particular temperature since its last entry in the lab.\\n\\nNi et al.[2] present many of these environments with temporal dependencies, for example,\\n* Goal Navigation [3] environment where the agent needs to memorize crucial information throughout the episode in order to optimally reach the goal state.\\n* PsychLab [3] environments simulating psychology laboratories like Arbitrary Visuomotor Mapping where objects are presented in a series to an agent, and each object is associated with a look-direction. 
If the agent looks in the associated direction the next time it sees a particular object in the episode, it receives a positive reward.\\n* Spot the Difference [3] environment, where the agent has to move between two rooms, with a \\u201cdelay\\u201d corridor in between and correctly identify the difference in the two rooms. \\n* Memory Maze [4], where the agent is placed in a maze with $K$ colored balls, and the agent must accumulate the balls in a specific order (which is randomly set for each episode). To act optimally here, the agent needs to memorize the position of objects, wall layouts etc.\\n\\nAlthough in real life similar tasks will have a much larger observation space, whenever the underlying transition and reward functions are *regular*, they can be modelled as an RDP that can be learned using the methods we propose. Such problems could be alternatively modelled as POMDPs, but the class of RDPs is a strict subset of the class of POMDPs [5], so RDP learning methods have the potential to be significantly more efficient than POMDP learning methods.\\n\\nIn our paper we focus on the T-maze as a running example because it exemplifies the essence of environments where the history must be taken into account. There, in order to act optimally, the agent must remember the goal location which is disclosed at the beginning of an episode. The environment is purposefully simple, but such a simple dependence on the history can appear in real-world domains that are arbitrarily complex. \\n\\nFor instance, it can appear in robotics applications. Say a robot must navigate an arbitrarily large grid, it starts at the bottom left corner, and it is immediately told whether the goal is in the top right corner or in the bottom right corner. The grid itself can be arbitrarily complex, e.g., with obstacles to avoid. 
Still, in order to act optimally, the robot must rely on the information present in the history\u2014i.e., the initial piece of information disclosing the location of the goal. \nAlso in the domains we consider in the experiments it is key to rely on the history of past events, and they are relevant for applications in robotics.\n\n[1] Gabaldon, A. (2011). Non-markovian control in the situation calculus. Artificial Intelligence, 175(1), 25-48.\n\n[2] Ni, Tianwei & Ma, Michel & Eysenbach, Benjamin & Bacon, Pierre-Luc. (2023). When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment. 10.48550/arXiv.2307.03864. \n\n[3] Fortunato, M., Tan, M., Faulkner, R., Hansen, S.S., Badia, A.P., Buttimore, G., Deck, C., Leibo, J.Z., & Blundell, C. (2019). Generalization of Reinforcement Learners with Working and Episodic Memory. ArXiv, abs/1910.13406.\n\n[4] Pa\u0161ukonis, J., Lillicrap, T.P., & Hafner, D. (2022). Evaluating Long-Term Memory in 3D Mazes. ArXiv, abs/2210.13383.\n\n[5] Brafman, R. & De Giacomo, G. (2024). Regular decision processes. Artificial Intelligence 331, 104113.\"}", "{\"summary\": \"The authors present a new metric $L_\\\\chi$ for regular decision processes (RDP) based on formal languages. In theorem 1, this metric is shown to be an improvement on previous metrics $L_q^p$ by outlining a family of RDPs for which the $L_q^p$-distinguishability decays as time horizon increases while the $L_\\\\chi$-distinguishability is constant. Example 4 is particularly useful in illustrating this phenomenon where a T shaped grid world with an increasing corridor length is causing a decay of distinguishability while it is constant in the second case. They then compare the distinguishability of these metrics for the policies produced by the algorithm ADACT-H.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Well written. 
The example of the T shaped world helps pinpoint the cause of the deficiency of the $L_q^p$ metric and shows how the $L_\\chi$ metric fixes the problem.\", \"weaknesses\": \"The proof of theorem 1 looks correct but I did not have time to check all details. A quick outline of how it works would be welcome.\", \"questions\": \"I would like to see a definition of \\\"episode trace\\\". I think I understand the meaning from the context but it would be best to clearly write it down somewhere.\\nThe ADACT algorithm is first mentioned on line 434 but no definition is given before that. I later found the definition in the appendix but it would be good to have a link to that definition when it is first mentioned.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
An important objective of the present paper is to illustrate the benefits of the proposed language metric, and we believe that the experiments supplement the theory by showing its superior performance in several domains. Of course this is only a first step towards handling larger domains.\"}", "{\"summary\": \"The paper offers approaches for offline RL in Regular Decision Processes. It focusses on improving sample efficiency for the state merging part of existing algorithms. One contribution is a technique that employs a language metric for state comparisons. The metric is somewhat akin to the principles behind e.g. graph kernels based on walks (see Gaertner, Flach, Wrobel, 2003) as it compares the probability distributions of the set of traces that can be generated by starting in two given states (nodes) and walking around in the RDP (on the graph). The paper presents an efficient way to find relevant (i.e. state distinguishing) patterns in historic traces and a memory saving approach for storing probability distributions over large histories. The paper includes a theoretical analysis and a (brief) experimental study.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The contributions are well placed in the context of directly related work. Although some more distant related work might have been missed, such as the graph kernel work mentioned in the summary, this represents only a minor point. To the best of my knowledge, the paper contains sufficient novelty to represent a good contribution, and it is nice to see the domain of tractable RL algorithms extended to settings beyond MDPs.\\nThe work and analysis presented are substantial. I will honestly admit I did not rigorously go through all the mathematics of the paper and the appendices. However, the parts I did go through illustrated the care for detail required.\\nThe paper is very well written too. 
There is a lot of material to get through and the organisation of the paper is well done and the focus is good. This may also explain the limited experimental support supplied in the paper. The authors clearly decided to focus on the theoretical guarantees they could provide and supplied a (in their own words) numerical experimental evaluation.\", \"weaknesses\": \"Being unfamiliar with the term Regular Decision Process, I had to dig up the 2024 paper (there is also a 2019 one, which I missed) in the hope of finding more about the type of domains they represent. The authors do define the RDP setting, but the paper does not succeed in relating the usefulness of the setting, i.e. it remains unclear how widely applicable the presented techniques are. It would have been nice to see some less theoretical application domains that fit the setting, are non-trivial to translate/transform into MDPs, yet present an interesting RL challenge. It is very hard for the reader to come up with one of these themselves, or at least it is in my experience while reading the paper.\", \"questions\": \"I understand it is always easy to ask for more, but your paper would substantially benefit from a non-toy environment, both mentioned as a fit for the learning setting you are tackling, but of course also used as an illustrative experiment to show what your contributions make possible that wasn't before. RL research's biggest boost came from showing solutions of non-trivial environments. I'm afraid that your good work might be lost without such a demonstrator.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
EW62GvCzP9
Truthfulness Without Supervision: Model Evaluation Using Peer Prediction
[ "Tianyi Qiu", "Micah Carroll", "Cameron Allen" ]
Current evaluation methods for language models rely on supervision, but trusted supervision for difficult tasks is often unavailable, especially for superhuman models. In these cases, models have been demonstrated to exploit evaluation schemes built on such imperfect supervision, leading to deceptive evaluation results. However, underutilized in the context of model evaluation, a wealth of mechanism design research focuses on game-theoretic *incentive compatibility* - eliciting honest and informative answers without trusted supervision. Drawing from this literature, we introduce the peer prediction method for model evaluation. It tells apart honest and informative answers from deceptive and uninformative ones, using a metric based on mutual predictability and without requiring ground truth labels. We demonstrate the method's effectiveness and resistance to deception, with both theoretical guarantees and comprehensive empirical validation on up to 405B-parameter models. In contrast to LLM-as-a-Judge which requires strong and trusted judges, we discover an inverse scaling property in peer prediction, where, surprisingly, resistance to deception is *strengthened* as the capability gap between the jury and participants *widens*, enabling reliable evaluation of strong models without trusted supervision. In particular, LLM-as-a-Judge evaluations become worse than random guesses when facing deceptive models 5-20$\times$ its size, while peer prediction thrives when such gaps are large, including in cases with over 100$\times$ size difference. Looking forward, we view this work as a step towards game-theoretic resistance to model deception in alignment and evaluation.
[ "Language Model Evaluation", "AI Alignment", "AI Truthfulness and Deception", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=EW62GvCzP9
https://openreview.net/forum?id=EW62GvCzP9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vrApOPzfhl", "sOBmq1cidE", "oew6ZIEtu9", "mpHlBqS7cO", "j8GKEDY2ZU", "haLSuJKHYT", "hPHws5b37o", "bv48hzsMjb", "ZSpxEgIK1Z", "YkCjfZ8bXe", "SqR8WTTwBV", "RM2c7rKnv0", "Plh4q1zJpn", "PJ5m36nmI1", "NsMiBu3d6n", "NFygs8bQa5", "N9iJZLrnHw", "LKa1Eymxr8", "KPL58bDrs5", "DylSDDJeFc", "CpsFk0IZBO", "9SWdLLVBvX", "6RbIAn5dQE", "5LaVRul8kP", "1zxwWJU7xV" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732781361994, 1729357035786, 1732772243590, 1732421691601, 1733081063994, 1730739135032, 1732967811741, 1733299173017, 1737523623960, 1732777297238, 1732448766625, 1732421793210, 1732557011939, 1733125728951, 1732483556710, 1732421773189, 1732477736316, 1730680371414, 1732557069860, 1732607098201, 1733040950776, 1732421615942, 1732868249333, 1734916421705, 1732448839202 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_6HAK" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_73k9" ], [ "ICLR.cc/2025/Conference/Submission4190/Reviewer_RBqK" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ], [ "ICLR.cc/2025/Conference/Submission4190/Area_Chair_ayJe" ], [ "ICLR.cc/2025/Conference/Submission4190/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear authors,\\n\\nAfter spending some additional time reading the rebuttal and the paper, I still cannot help but find the inverse scaling results extremely implausible. If I understand correctly, they seem to indicate that the proposed method seems to *consistently perform better when worse jury models are used*. I find it difficult to believe that this result would generalize to realistic settings and also fail to see, how the result would be explained by the theory. \\n\\nThis makes me very reluctant to raise my score, despite the evident effort put into the rebuttal. \\n\\nCould you provide any intuition/explanation, on why you believe the inverse scaling results are more than an artifact of the specific evaluation method, and would generalize?\\n\\nI also had a look into the posted repository to get a better understanding, but found things difficult to parse. 
Would it maybe be possible for you to share a csv file that contains the participants' responses and juror scores for your main experiment on MMLU (or a representative subset, in case that is too much data) in an easily accessible way?\"}", "{\"summary\": \"This paper proposes a peer prediction mechanism for evaluating LLMs: A Model A's answers are scored by how much they help a \\\"juror\\\" J predict other model B's answers. The authors prove that assuming a joint prior over models' \\\"real\\\" answers (or a known distribution over participant's priors with some regularity conditions), reporting the \\\"real\\\" answers is a Bayesian Nash equilibrium. Experiments are conducted using a variety of different LLMs of different sizes, and on multiple different LLM benchmarks. LLMs misrepresenting their \\\"real\\\" answer are modeled by a prompt that tells models to provide convincing false answers.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Exploring methods for using LLMs to aid in the evaluation of LLMs is a very timely topic\", \"The experiments cover a wide range of different tasks, as well as different models.\", \"Judging from my limited expertise in peer prediction, Theorem 2 could be of independent theoretical interest (if it is indeed novel as claimed in the appendix).\"], \"weaknesses\": [\"The proposed approach seems to suffer from a lot of issues that are common for peer prediction mechanisms:\", \"The assumption of shared priors does not appear to be very realistic, and I am not convinced the generalization from Theorem 2 helps much, as it appears to require full knowledge of the distribution over priors.\", \"Honesty seems unlikely to be the only Bayesian Nash equilibrium. 
Unlike for other peer prediction mechanisms, collusion might not even be necessary for deviating from honesty in this case: If I understand correctly, a witness would obtain perfect reward if it encoded the defendant's answers in its own (in a way the juror is able to decode). However, an answer that encodes the correct answer can be very different from the correct answer in many cases.\", \"If I understand the experimental setup correctly, it appears to provide an unfair advantage to the proposed method over LLM-as-a-Judge, as despite the weak judge, the peer prediction mechanism has access to strong peer models. If such models are available, it seems misleading to have LLM-as-a-Judge use the weak judge model rather than one of the stronger peer models.\", \"Some additional information on why LLM-as-a-judge was implemented the way it is (for example, no few-shot prompting) would also be helpful to better assess whether the comparison is fair or an overly weak baseline [1] was chosen.\", \"I am worried that the inverse scaling experiment might be confounded by a similar issue: The improvements could be caused by the increasing capability of the peers rather than the increasing capability of the evaluated model. An ablation in which the peer models are fixed to the juror model's size would be useful here.\", \"The experiments on incentive compatibility seem to only employ a single, non-adaptive deceptive \\\"attack\\\". 
This is insufficient to establish that the mechanism works in practice [2], especially for strong, potentially superhuman, models.\", \"Considering different approaches to deception and analyzing them in more detail would also help to ensure that the observed inverse scaling is not just an artifact of the specific deceptive attack: One potential explanation for the inverse scaling would be that small models rarely understand the instruction to be deceptive and thus simply behave very similarly to the honest model, making them impossible to detect.\", \"The paper at times makes very strong claims such as \\\"the method [is] applicable to superhuman models\\\" and \\\"enabling reliable evaluation of superhuman models without trusted supervision.\\\" that seem too strong, even disregarding the potential issues with the experiments.\"], \"nitpicks\": [\"The naming of the entities involved in the game is a bit confusing: Why is the Witness receiving the punishment rather than the defendant?\", \"The paragraph \\\"Scaling Properties with Jury Population Size\\\" seems to lack some details, such as what regression the R^2 refers to.\", \"The font used for probability distributions is hard to read (D alone would be fine, but P, D and Q(?) all look pretty similar)\", \"It might be useful to provide a few basic details on how the juror model is implemented in the main text rather than only in the appendix.\", \"I would find the statement of theorem 1 a bit more clear if the last condition on the game's scores was stated in the beginning rather than as an if-clause in the very end.\", \"Figure 1 would be easier to read with subcaptions indicating the differences between subfigures. Also, the y-axis seems to show the negative of punishment for deception (otherwise, higher rather than lower would be better)\", \"[1] Carlini, Nicholas, et al. \\\"On evaluating adversarial robustness.\\\" arXiv preprint arXiv:1902.06705 (2019).\", \"[2] Lipton, Zachary C., and Jacob Steinhardt. \\\"Troubling Trends in Machine Learning Scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research.\\\" Queue 17.1 (2019): 45-77.\"], \"questions\": [\"Do the results shown in figure 1 look qualitatively similar when only multiple choice or only open-ended tasks are considered?\", \"The numbers in the experiment from section 3 are difficult to interpret without more context: For many tasks, the gap between the 70B model and the 405B one seems small, and the 70B model even performs better in about 20% of tasks. How do these gaps relate to gaps in the models' performance on these tasks according to the ground truth (as well as according to model-as-a-judge)?\", \"Could you add an ablation using different models rather than instances of the same model as participants? That would be useful to rule out the following explanation for inverse scaling:\", \"With its very limited training data (3 few-shot examples), the best the juror can do for honest witnesses is to predict that the defendant's answer is similar to the witness's. Meanwhile for dishonest models, it can only guess. Now assume that witness and defendant are either the same model, or an honest and a deceptive version of the same model. Then, it will be very easy to distinguish the honest and lying witness if the honest model's answers are deterministic, but harder if they have very high variance. The observed inverse scaling would now follow from the plausible assumption that larger models produce more similar answers with repeated sampling than smaller models do. However, this kind of inverse scaling would cease as soon as models are large enough to be generally self-consistent.\", \"It looks like the surplus actually increases in $\\\\alpha$. 
Why are no values of $\\\\alpha$ larger than one included?\", \"What are the key elements of the proofs that distinguish them from the ones in Schoenebeck & Yu (2023)?\", \"Regarding inverse scaling, it is currently difficult to read how the method scales with juror quality. Could you add a plot with the judge size rather than the evaluated model's size varying on the x-axis?\", \"In theorem 2, why is there a probability 1-$\\\\delta$ if the equilibrium is supposed to be ex-ante? Does that mean that there are (rare) agents that have an incentive to deviate ex-ante?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response.\\n\\n> The most accurate description would be peer prediction evaluates truthfulness, i.e. the extent to which the evaluated answer matches the truth. \\n\\nI find this statement confusing. It seems to me like the method can clearly not reliably evaluate truthfulness, unless the other models involved in the evaluation know the truth?\"}", "{\"title\": \"Rebuttal [2/3]\", \"comment\": \"> Considering different approaches to deception and analyzing them in more detail would also help to ensure that the observed inverse scaling is not just an artifact of the specific deceptive attack: One potential explanation for the inverse scaling would be that small models rarely understand the instruction to be deceptive and thus simply behave very similarly to the honest model, making them impossible to detect.\\n\\nWe agree with your assessment that we need to consider diverse modes of deception, especially those likely to arise in real-world failure cases of AI control [1,2,3]. We have thus tested peer prediction on MisleadLM [1], an LM which exhibits deceptive behaviors as a result of canonical RLHF training. 
\\n\\nThe results, presented in **Figure 6**, uphold the inverse scaling property, even though both peer prediction and LLM-as-a-Judge find it harder to distinguish deceptive answers compared to the previous experiments, possibly due to increased convincingness of deceptive answers.\\n\\n> The assumption of shared priors does not appear to be very realistic, and I am not convinced the generalization from Theorem 2 helps much, as it appears to require full knowledge of the distribution over priors.\\n\\nTheorem 2 can be directly extended to the case where each participant $i$ has their own \\\"prior over priors\\\" $\\\\mathcal{D}_i$. To show this fact, we need to verify that the honest strategy profile is indeed a Bayesian Nash equilibrium (BNE henceforth) under this \\\"private $\\\\mathcal{D}_i$\\\" setting. To do that, observe that *for any participant $i$*, the property that \\\"honest reporting is its *ex-ante* optimal strategy given all others do so\\\" only depends on $i$'s personal belief $\\\\mathcal D_i$ about others' beliefs, and not what the others really believe.\\n\\nIt doesn't matter whether $\\\\mathcal D_i$ is modelled as a distribution over $[0,1]^{n|\\\\mathcal A|}$ (i.e. distribution over priors) or over $\\\\mathcal P\\\\left([0,1]^{n|\\\\mathcal A|}\\\\right)$ (i.e. distribution over distributions over priors), since the linearity of expected payoff means that BNE in the former case is preserved in the latter case, and $\\\\mathcal P(\\\\cdot)$ can simply be removed by linearity.\\n\\nNote that at this point, we are basically modeling hierarchical beliefs, which, in theory, would make the type-based formalism of [epistemic game theory](https://faculty.wcas.northwestern.edu/msi661/EpistemicGameTheory-131120.pdf) handy. 
However, we decided that introducing type notations would make things needlessly complicated without touching the problem's core, and so avoided hierarchical beliefs (those with >2 levels) in the theorem statement.\\n\\nWe have made these considerations clear in Remark 1 of the revised manuscript.\\n\\n> Honesty seems unlikely to be the only bayesian nash equilibrium. [...] If I understand correctly, a witness would obtain perfect reward if it encoded the defendant's answers in its own.\\n\\nThe witness does not see the defendant's answer, so the \\\"perfect reward by encoding\\\" claim is only true when the witness $w$ has *all* the private information that the defendant $d$ has (otherwise $w$ cannot show the jury the information in $d$'s answer that $w$ doesn't have), which is rare. And even when it does, it ($w$) could just choose to report all the information it has at face value without doing any encoding work (exactly the honest behavior we desire), and still get a perfect reward.\\n\\nIn general, while there can indeed be other BNE, honesty is (among) the highest-paying BNE, as shown in the \\\"achieves maximum ex-ante payoff among all equilibria for any agent\\\" part of Theorem 1. \\n\\nAI systems usually lie because lying gets them strictly more reward (as is the case in, for example, experimental works demonstrating deception [1,2,3]). The fact that honesty is among the highest-paying BNE (by Theorem 1) ensures that no other equilibrium can reward any agent more than honesty does, thus removing the incentive to lie, even though it does not ban deception in the strict sense.\\n\\n> [...] it appears to provide an unfair advantage of the proposed method over LLM-as-a-Judge, as despite the weak judge, the peer prediction mechanism has access to strong peer models.\\n\\nWe agree that this is an advantage, but it's a well-earned one. 
Our working assumption is that *smaller/weaker models are trusted, while larger/stronger models are not* - a standard assumption in scalable oversight [4,5,6]. The only feedback information LLM-as-a-Judge uses is that from the *trusted* judge (must be trusted because the judge's evaluation is used at face value without verification), while peer prediction can utilize the capabilities of *untrusted* models by having them as fellow participants - something LLM-as-a-Judge is unable to accommodate.\"}", "{\"title\": \"We Agree (Response to Nov 29 Followup)\", \"comment\": \"Thank you for your followup response!\\n\\n> Would you mind adding what you respond to \\u201c\\u2018distinguish better models from worse ones\\u2019 is not a good evaluation metric\\u2026\\u201d on your conclusion-future direction part?\\n\\nYes, we agree this is important, and commit to including that explanation in our camera-ready version.\\n\\nThe rebuttal-stage deadline for uploading revised manuscript has unfortunately passed (it seems that the response wasn't visible to us until the 29th), and the next time we are able to do this will be in the camera-ready version. Given that this comment is fully public, we are making this a credible commitment and will make sure to follow through.\\n\\n> It is fair to try some less powerful models to propose and verify your method, but you have to make it clear and well-explained in your paper\\n\\nThis makes sense. In our current manuscript on OpenReview, we have already removed the word \\\"superhuman\\\" from all substantial claims we make about peer prediction, and instead only used it when explaining the background and motivation. 
(For instance, it now occurs only once in the abstract, when introducing the motivation.)\\n\\nIn addition to this, as per your suggestions, we commit to doing the following in our camera ready version:\\n- Include the following sentence in the abstract: *\\\"While peer prediction hasn't been tested on strictly superhuman models, its ability to reliably supervise models much stronger than the jury hints at its potential applicability to superhuman models.\\\"*\\n- Add a paragraph of text in introduction, next to the \\\"our contributions\\\" paragraphs, restating and explaining the *\\\"While peer prediction hasn't been tested on strictly superhuman models [...]\\\"* sentence above.\\n- In section 3, after line 240, add a paragraph explaining our setting, namely (1) that we are using \\\"evaluating stronger models with much weaker juries\\\" as a proxy for our \\\"evaluating frontier-pushing models\\\" motivation, and (2) that we are motivated by frontier-pushing models due to the lack of reliable means to evaluate them.\\n\\nFinally, a quick explanation on why we did the experiments we did:\\n- During our study, Llama3.1-405B came out, with [benchmark results](https://ai.meta.com/blog/meta-llama-3-1/) comparable to GPT-4o and Claude 3.5 Sonnet at the time. We thus considered it to be at least *at* the frontier of capabilities (if not actively pushing it), and included it into our experiments. We acknowledge though, it's not necessarily still at the current frontier now, and due to budget constraints we only tested 405B in our informativeness experiments.\\n- Since the frontier is always shifting, we think the unchanging core of our setting is \\\"evaluating a model without using any other supervisor with comparable capabilities\\\", rather than whether any model is at the frontier at any specific time. 
However, we do strongly agree we need to make this setting very clear, and we aim to do so in the changes we committed to above.\\n\\n> I would also suggest reorganizing the content related to Theorem 1, mainly the paragraph [...]\\n\\nWe agree. We commit to adding the following paragraph right after Theorem 1, before line 270:\\n\\n*While Theorem 1 focuses on an all-honest equilibrium, our later experiments show that when $50\\\\\\\\%$ (Figure 5,12) or sometimes even $75\\\\\\\\%$ (Figure 12) of the participants are deceptive, honest strategies are still favored over deceptive ones. This indicates that, in practice, peer prediction is effective in incentivizing honesty, even in the presence of deception by others.*\\n\\n---\\n\\nThank you again for the very helpful comments! They have been highly instrumental to improving our manuscript. We could also share our draft of the additional paragraphs we committed to adding, if you wish to see them, and if that's allowed by ICLR rules.\\n\\nPlease feel free to let us know of any additional questions.\"}", "{\"summary\": \"When discussing evaluation methods for language models, the strong reliance on supervision and the unavailability of reliable supervision on hard tasks, particularly in scalable oversight, lead to the exploration of \\u201cevaluation methods without reliable supervision\\u201d. Therefore, this paper proposes the \\u201cpeer prediction method\\u201d, which leverages \\u201cgame-theoretic incentive compatibility\\u201d from the mechanism design literature, to provide resistance to deception without trusted supervision.\\n\\nThis paper first clarifies the importance of exploring evaluation methods, explains its inspiration for leveraging \\u201cgame-theoretic incentive compatibility\\u201d, and then highlights the merits of its \\u201cpeer prediction method\\u201d. 
The \\u201cpeer prediction method\\u201d applies several models as \\u201cparticipants\\u201d and some separate agents as \\u201cjurors\\u201d, then evaluates participants\\u2019 answers to held-out questions by assessing their ability to help jurors predict others' responses, using peer answers as targets instead of ground-truth labels. In the experiment section, the paper applies a dataset containing questions spanning a wide range of domains to test the effectiveness and resistance to deception, including a finding of \\u201cInverse Scaling Properties\\u201d and other ablation studies on scaling properties.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper proposes a novel method that does not need reliable supervision. It is resistant to deception and has strong scaling performance.\\n\\nThis paper takes \\u201cgame-theoretic incentive compatibility\\u201d into consideration from the outset and provides mathematical proofs for its theorems.\", \"weaknesses\": \"1. The scenario of exploring the topic of \\u201cevaluate models without supervision\\u201d is not well defined. According to this paper, \\u201clack of reliable supervision\\u201d occurs in \\u201cscalable oversight\\u201d, which is well explained. Therefore, it is reasonable to discuss this scenario, but only when limited to it. For other scenarios, \\u201caiming to ensure that they are safe, reliable, and beneficial\\u201d (Line 57) does not necessarily lead to the motivation to \\u201cevaluate models without supervision\\u201d. The claim of \\u201ctrusted supervision for difficult tasks is often unavailable\\u201d (Line 12) in the abstract does make sense but lacks sufficient and concrete examples (only \\u201cscalable oversight\\u201d is mentioned). Further elaboration and explanation of \\u201cunder what circumstances trusted supervision is unavailable\\u201d should be provided.\\n\\n2. Some of the conclusions are based on strong assumptions. 
Take Lines 270 to 277 as an example: it is reasonable to have the conclusion of \\u201cpeer prediction method is incentive compatible\\u201d, but the conclusion of \\u201cIn particular, models are incentivised to converge upon honest and informative policies, if either (I) they are trained on the peer prediction scores as reward signals, or (II) they perform inference-time reasoning to maximize the evaluation scores\\u201d likely relies on a strong assumption that efficient benign answers are required. If tremendous malicious answers are proposed, according to \\u201cincentive compatibility\\u201d, the models may be incentivized to converge upon deceptive results.\\n\\n3. The methodology strongly relies on the combination of several LMs. Considering the peer prediction method takes peer answers as targets instead of ground-truth labels, the results of the models are interdependent. That means if one of the models is changed, the performance of the other models will change correspondingly. The provided experiments lack examples of different combinations of LMs as participants.\\n\\n4. This method will be very resource-intensive when considering superhuman models. Plus, \\u201cdistinguishing better models from worse ones\\u201d is not a good evaluation metric to me, as it is a relative result instead of an absolute one, which means it will have more limitations. For example, you need to find appropriate models for comparison when testing\", \"questions\": \"1. Could you please provide further explanation for what is mentioned in weakness 1 and weakness 2?\\n\\n2. 
According to weakness 3, would you be able to provide more experiments considering different combinations of participant models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response.\\n\\nI tried to open one of the gz files on two different operating systems but got an error message. \\n\\nThis explanation looks somewhat similar to some of the concerns I voiced in my initial review regarding the jury model essentially \\\"repeating\\\" the witness answer. But in that case, it seems as if the impact of a dishonest defendant would conceptually be very similar to the impact of a dishonest judge in the llm-as-judge setting (with the main differences that a) dishonest predictions and dishonest judgments can look somewhat different and b) multiple judges are used in the peer prediction setup). This reinforces my worry about the fair comparison to llm-as-judge in terms of model access. \\n\\nAfter all, while I appreciate the authors' engagement during the rebuttal, I have decided to keep my score.\\n\\nFor future versions, I strongly recommend de-emphasizing or further qualifying the inverse scaling results, as I do think that they are misleading with respect to the practical scalability of the method (for which scaling in the participants' size would be more relevant). In addition, I do recommend using a fairer comparison to LLM as a judge, where both methods have access to the same other models. \\n\\nAs a side note, after going through the figures several times now: The x-axis on the figures is very confusing. 
Rather than showing the size gap (which leaves open which of the two models is scaled), it would be much clearer if it just stated \\\"jury model size\\\".\"}", "{\"title\": \"Final Remarks\", \"comment\": \"We really appreciate that you've gotten back to us!\\n\\n> I would not be surprised if you'd observe similar results with LLM-as-judge, depending on the specific implementation\\n\\nTo the best of our knowledge, there has been no attempt in the literature [1] to extend LLM-as-a-Judge to a multiple-judge setting, and it remains an open question how such a method is to be implemented. Therefore, while comparisons of peer prediction with this extended LLM-as-a-Judge could strengthen our position, we find it unreasonable to require that such comparisons against *not-yet-proposed* methods be made. \\n\\nAmong all the possible population-based evaluation methods, peer prediction is, to our knowledge, the only one with solid theoretical support, and thus we chose peer prediction as our focus (while comparing it with baselines already present in the literature).\\n\\n> This explanation looks somewhat similar to some of the concerns I voiced in my initial review regarding the jury model essentially \\\"repeating\\\" the witness answer.\\n\\nAs can be seen in the data files that we share, informant and defendant answers are almost always substantially and semantically different, so it seems unclear to us how the \\\"repeating\\\" explanation could be applied. \\n\\nOne could argue that the jury may simply be evaluating semantic similarity between answers, but we believe this explanation has already been ruled out by experiment results featured in our *Rebuttal [1/3]* (please search for \\\"Null Hypothesis\\\").\\n\\n---\\n\\nThank you again for your engagement, and for the feedback you've provided! We appreciate this exchange.\\n\\n[1] Gu, Jiawei, et al. 
\\\"A Survey on LLM-as-a-Judge.\\\" arXiv preprint arXiv:2411.15594 (2024).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> It seems to me like the method can clearly not reliably evaluate truthfulness, unless the other models involved in the evaluation know the truth?\\n\\nThe peer prediction metric, as a proxy measure for truthfulness, operates by assuming that (a majority of) other participants know the truth. (Theorem 1 in particular assumes everyone is Bayesian and thus can't be confidently wrong; Theorem 2 relaxes this to some extent by allowing prior disagreements.)\\n\\nIn practice though, truths tend to be more consistent than falsehoods (barring collusion), which give them asymmetric advantage over falsehoods. For instance, a correct mathematical calculation can be helpful for predicting another correct mathematical calculation, as the two share the same set of rules; on the other hand, a mistaken mathematical calculation is less helpful for predicting another mistaken mathematical calculation, as mistakes are made in random ways.\\n\\nIndeed, we see that in practice, peer prediction can measure truthfulness even when falsehood-tellers constitute \\u2265half the population (cf. Figure 5/6/12). Though of course, this doesn't always work, and still breaks down when falsehoods are sufficiently prevalent and the model capability gap is at the lower end of the inverse scaling property (cf. Figure 11).\"}", "{\"title\": \"Rebuttal [1/2]\", \"comment\": \"Hi Reviewer 6HAK - thank you for your feedback! We've found your comments to be highly instructive.\\n\\n## Summary of Changes\\n\\nIn our newly uploaded manuscript, most new content are concentrated in **page 14-17**. Namely,\\n- Results from a range of new experiments (**Figure 5,6,7,8,9** newly added), including:\\n - Scaling experiments on fully heterogeneous participants (**Figure 5**). 
Results suggest that peer prediction indeed checks for truthfulness as opposed to mere similarity.\\n - Scaling experiments on populations containing half or more deceptive participants (**Figure 5,10,11,12**; the last 3 already existed in the submitted manuscript). The results also suggest that peer prediction checks for truthfulness as opposed to mere similarity.\\n - Scaling experiments with realistic, RLHF-trained deceptive model (**Figure 6**), based upon [MisleadLM](https://huggingface.co/jiaxin-wen/MisleadLM-QA) [1]. Results suggest that peer prediction continues to function in such settings akin to real-world deployment.\\n - Correlating peer prediction scores with ground truth accuracy, at a domain level (**Figure 7**). Results confirm that peer prediction rewards accurate answers.\\n - Visualizations of score distributions (**Figure 5,6**), for intuition-building purposes.\\n - Scaling properties of peer prediction with jury population size, for a wider range of aggregation exponent $\\\\alpha$ (**Figure 8**). $\\\\alpha=-1$ is confirmed to be a sweet spot.\\n - Scaling properties of *counterfactual* deception resistance (**Figure 9**). Results confirm that under peer prediction, honesty is a dominant strategy for participants.\\n- Updated the main text as per presentation suggestions from reviewers; please see the detailed reply below for further explanations.\\n- [Anonymized repository](https://anonymous.4open.science/r/Peer-Prediction-LLM-Eval-EB1E/) for reproducibility.\\n\\n## Response to Questions and Suggestions\\n\\n> \\u201caiming to ensure that they are safe, reliable, and beneficial\\u201d (Line 57) does not necessarily lead to the motivation to \\u201cevaluate models without supervision\\u201d. \\n\\nIt's a great question - thanks for raising it! Here is our motivation. 
\\n\\nBefore we deploy strong (and possibly superhuman) models into the real world, we need to evaluate their safety and trustworthiness [1,2] to prevent harms, including, importantly, harms from deception and manipulation [3]. Indeed, such evaluation has been the official policy of [leading](https://www.anthropic.com/news/anthropics-responsible-scaling-policy) [AI](https://cdn.openai.com/openai-preparedness-framework-beta.pdf) [labs](https://deepmind.google/discover/blog/introducing-the-frontier-safety-framework/). \\n\\nHowever, when the model that we evaluate is itself pushing the frontier of AI capabilities (e.g. when OpenAI released GPT-4), there is, by definition, no stronger model to supervise it. Human evaluators are very often also deceived or misled by convincing models [3,4,5,6], and thus cannot be fully trusted to provide supervision. This fact that no trustworthy supervision exists when evaluating frontier models, is what motivates our work on supervision-free evaluation.\\n\\n> Further elaboration and explanation of \\u201cunder what circumstances when trusted supervision is unavailable\\u201d should be provided.\\n\\nAs argued in the paragraph above, trusted supervision is unavailable when **the model being evaluated is itself pushing the frontier of AI capabilities**. This is because the model is stronger than any other model that could supervise it (we demonstrated in Figure 1 that evaluating strong models with a weak LLM-as-a-Judge is prone to deception), and the model is often capable of exploiting human supervision as well [3,4,5,6].\\n\\n> \\u201cIn particular, models are incentivised to converge upon honest [...]\\u201d is likely to lead based on a strong assumption that efficient benign answers are required. 
If tremendous malicious answers are proposed, according to the \\u201cincentive capability\\u201d, the models may be incentivized to converge upon deceptive results.\\n\\nIndeed, incentive compatibility (Theorem 1 and Theorem 2) shows that *when other agents are honest*, the model is incentivized to be honest as well. It does not rule out the possibility of deception when other agents are deceptive, but *neither does it imply that deception is incentivized in such cases* - intuitively speaking, there is an asymmetry between the payoffs of honest and deceptive strategies when information-theoretic mechanisms (like peer prediction) are used [7], with honesty generally having an advantage. Underlying this intuition is the *Data Processing Inequality* in Appendix C.1.\\n\\nUltimately, it all comes down to experiments. Our experiment results show that when $50\\\\\\\\%$ (**Figure 5,12**) or sometimes even $75\\\\\\\\%$ (**Figure 12**) of the population are deceptive, honest strategies can still win against deceptive strategies. This indicates that peer prediction is effective in incentivizing honesty, even in the presence of deception by others.\"}", "{\"title\": \"References\", \"comment\": \"[1] Wen, Jiaxin, et al. \\\"Language models learn to mislead humans via rlhf.\\\" *arXiv preprint arXiv:2409.12822 (2024).*\\n\\n[2] Lang, Leon, et al. \\\"When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback.\\\" *arXiv preprint arXiv:2402.17747 (2024).*\\n\\n[3] Williams, Marcus, et al. \\\"Targeted Manipulation and Deception Emerge when Optimizing LLMs for User Feedback.\\\" *arXiv preprint arXiv:2411.02306 (2024).*\\n\\n[4] Burns, Collin, et al. \\\"Weak-to-strong generalization: Eliciting strong capabilities with weak supervision.\\\" *arXiv preprint arXiv:2312.09390 (2023).*\\n\\n[5] Leike, Jan, et al. 
\\\"Scalable agent alignment via reward modeling: a research direction.\\\" *arXiv preprint arXiv:1811.07871 (2018).*\\n\\n[6] Khan, Akbir, et al. \\\"Debating with more persuasive llms leads to more truthful answers.\\\" *arXiv preprint arXiv:2402.06782 (2024).*\"}", "{\"title\": \"Rebuttal [1/2]\", \"comment\": [\"Thank you, Reviewer RBqK, for your helpful feedback!\", \"## Summary of Changes\", \"In our newly uploaded manuscript, most new content are concentrated in **page 14-17**. Namely,\", \"Results from a range of new experiments (**Figure 5,6,7,8,9** newly added), including:\", \"Scaling experiments on fully heterogeneous participants (**Figure 5**). Results suggest that peer prediction indeed checks for truthfulness as opposed to mere similarity.\", \"Scaling experiments on populations containing half or more deceptive participants (**Figure 5,10,11,12**; the last 3 already existed in the submitted manuscript). The results also suggest that peer prediction checks for truthfulness as opposed to mere similarity.\", \"Scaling experiments with realistic, RLHF-trained deceptive model (**Figure 6**), based upon [MisleadLM](https://huggingface.co/jiaxin-wen/MisleadLM-QA) [1]. Results suggest that peer prediction continues to function in such settings akin to real-world deployment.\", \"Correlating peer prediction scores with ground truth accuracy, at a domain level (**Figure 7**). Results confirm that peer prediction rewards accurate answers.\", \"Visualizations of score distributions (**Figure 5,6**), for intuition-building purposes.\", \"Scaling properties of peer prediction with jury population size, for a wider range of aggregation exponent $\\\\alpha$ (**Figure 8**). $\\\\alpha=-1$ is confirmed to be a sweet spot.\", \"Scaling properties of *counterfactual* deception resistance (**Figure 9**). 
Results confirm that under peer prediction, honesty is a dominant strategy for participants.\", \"Updated the main text according to presentation suggestions from reviewers; please see the detailed reply below for further explanations.\", \"[Anonymized repository](https://anonymous.4open.science/r/Peer-Prediction-LLM-Eval-EB1E/) for reproducibility.\", \"## Response to Questions and Suggestions\", \"> I had a hard time understanding Fig 3, can you expand on the details of each subfigure?\", \"Certainly.\"], \"take_the_top_left_subfigure_as_example\": \"the x-axis is a collection of problem domains (public relations, marketing, ...), and for each of these domains we have 3 ranges plotted - red, blue, and green. The red range represents the mean score that the 8B model received from peer prediction (Algorithm 1) when answering test questions from this domain; blue for 70B; and green for 405B.\\n\\nThe top-left subfigure contains 10 domains, and the remaining 7 subfigures each similarly contains 10-11 domains, so that's a total of 85. These 85 domains together include 37079 test questions.\\n\\nIt can be seen that 405B generally outperforms 70B, and 70B generally outperforms 8B, which is expected. \\n\\nTo further demonstrate the ability of peer prediction scores to tell better answers from worse ones, we also found consistently positive correlation between peer prediction scores (which, recall, is calculated without ground truth labels) and ground-truth accuracy (**Figure 7**).\"}", "{\"comment\": \"> Peer prediction empirically remains truthful when half answers are deceptive, but LLM-as-a-Judge could not remain truthful when half judgments are deceptive.\\n\\nWhile I see the intuition behind this, this claim seems quite dependent on implementation details. In particular, it seems likely to me that your observations would not hold up with deceptive participants that are optimized to break the mechanism. 
Also, I would not be surprised if you'd observe similar results with LLM-as-judge, depending on the specific implementation. Further experiments substantiating this could strengthen future versions of the paper.\"}", "{\"comment\": \"Another clarification question:\\n\\nDo you think of your proposed method as a method for general model evaluation, or as a method for detecting deception more specifically? Some of the framing seems to point at the former, while most experiments are focused on the latter.\"}", "{\"title\": \"Rebuttal [3/3]\", \"comment\": \"> claims such as \\\"the method [is] applicable to superhuman models\\\" and \\\"enabling reliable evaluation of superhuman models without trusted supervision.\\\"\\n\\nThank you for pointing this out! We agree in hindsight, and are tuning down the strength of these claims in the revised draft. Those expressions were originally meant to convey our motivation for carrying out the research, and we didn't intend to claim that we've fully solved these problems in any sense of the word, but we now realize our phrasing seems to have indicated otherwise. We regret our mistake, and have corrected them in the revised draft.\\n\\n> Nitpicks:\\n\\nWe think these are good points - thanks! We have addressed all these issues in the revised draft, except the role naming one; we thought hard about this, but unfortunately failed to find a better set of names. Please let us know if you have suggestions about alternative naming schemes!\\n\\n> In theorem 2, why is there a probability 1-\\u03b4 if the equilibrium is supposed to be ex-ante?\\n\\nThanks for catching this! 
The final sentence of Theorem 2's statement should be corrected as \\\"...is, ex ante (when the distribution $\\\\mathcal D$ and the instantiation of all $\\\\mathcal P^{\\\\mathrm{J}}_j$ are known by the agents), an $\\\\epsilon$-BNE.\\\" In other words, we consider the priors of the jurors as already instantiated, and $\\\\delta$ is a probability about what those instantiated priors look like. Indeed, part of formula $(3)$ comes exactly from applying tail inequalities to the instantiation of the jurors' priors.\\n\\n> Do the results shown in figure 1 look qualitatively similar when only multiple choice or only open-ended tasks are considered?\\n\\nMost questions in the dataset are either multiple-choice or short-answer questions (e.g. a number theory problem with a number as the answer). We ask the participants to provide detailed reasoning along with the answer, and evaluate their reasoning along with the answer, thereby effectively using the questions as open-ended ones.\\n\\nIn our newly added **Figure 7**, we add the *accuracy* metric, which matches the models' output choice/short answer against the ground-truth choice/short answer (ignoring the reasoning), resulting in a 0/1 score for each question. We use Gemma2-27B to perform this identity checking (since robustly extracting the choice/short answer from the output and then checking semantic identity is non-trivial), and spot-checked its judgments to ensure reliability.\\n\\nCorrelation plots in **Figure 7** suggest that peer prediction score is a reliable proxy of ground-truth accuracy on our dataset.\\n\\n> For many tasks, the gap between the 70B model and the 405B one seems small, and the 70B model even performs better in about 20% of tasks. 
How do these gaps relate to gaps in the models' performance on these tasks according to the ground truth (as well as according to model-as-a-judge)?\\n\\n**Figure 7(b)** finds consistently positive correlations between peer prediction score gaps and ground-truth accuracy gaps, for all pairs of models.\\n\\n> It looks like the surplus actually increases in \\u03b1. Why are no values of \\u03b1 larger than one included?\\n\\nWe had hoped to include a wider range of $\\\\alpha$, but did not have the time to do so. We have now plotted those other values in **Figure 9**, with $\\\\alpha=-1$ remaining the best-performing exponent - turns out it's a unimodal rather than monotone relationship.\\n\\n> What are the key elements of the proofs that distinguish them from the ones in Schoenebeck & Yu (2023)?\", \"for_theorem_1\": \"As mentioned in the draft right before the proof, the general idea is the same. The key difference is in extending Schoenebeck & Yu (2023)'s proof from their 3-agent setting to our $n$-agent setting, still using their techniques.\", \"for_theorem_2\": \"The proof of theorem 2 is quite different, and we don't think there is an analog/counterpart in Schoenebeck & Yu (2023). One could intuitively think of it as theorem 1 + generalization bound (in the statistical learning theory sense), where each agent optimizes against a finite sample of fellow agents drawn from $\\\\mathcal D$, and we need to show that optimization against this sample doesn't deviate too far away from optimization against $\\\\mathcal D$ itself. The general direction of Theorem 2's proof is thus similar in spirit to proofs of statistical generalization bounds, but using quite different techniques (U-statistics, instead of VC dimension or Rademacher complexity).\\n\\nWe have added these explanations into our revised draft as Remark 2.\\n\\n---\\n\\nThank you again for the extremely helpful review. 
Please let us know what you think!\"}", "{\"comment\": \"Thank you for the detailed response.\\n\\nAs the discussion period ends in a few days and thoroughly evaluating the new experiments likely requires another detailed read of the paper, it is likely that I won't be able to reply in more detail before the end of the discussion period. I do however promise to go through the newly presented evidence and potentially update my score, before the AC decisions.\", \"some_immediate_questions_from_a_shallow_reading_of_the_response\": \"> To clarify, the scaling results in Figure 1 already fix participant model sizes within each subplot, and only vary the jury model size \\n\\nIs the jury model the same as the juror model?\\n\\n> The witness does not see the defendant's answer, so the \\\"perfect reward by encoding\\\" claim is only true when the witness has all the private information that the defendant has.\\n\\nIf I understand correctly, the same statement would be correct in a slightly modified form, replacing \\\"perfect\\\" by optimal (given the available information) and \\\"encoding the answer\\\" by \\\"providing maximal information about the answer in a way only the juror can understand\\\". Is that correct?\\n\\n> The only feedback information LLM-as-a-Judge uses is that from the trusted judge (must be trusted because the judge's evaluation is used at face value without verification), while peer prediction can utilize the capabilities of untrusted models by having them as fellow participants - something LLM-as-a-Judge is unable to accommodate.\\n\\nJust to clarify, this statement is derived from the game-theoretic analysis? If in practice someone gave me an LLM to use as a juror/defendant without a way to verify that it follows the incentives of the game you defined, there is no guarantee that the method would actually work, right? 
\\n\\n> Theorem 2 can be directly extended to the case where each participant has their own \\\"prior over priors\\\" \\n\\nIf that is indeed correct, I would suggest replacing theorem 2 by a statement like this, as it seems like a substantially more realistic setting.\"}", "{\"summary\": \"The authors propose \\\"peer prediction\\\" as a novel method to evaluate LLMs without requiring trusted supervision or ground truth labels. The method works by measuring how well one model's answers help predict another model's answers through a jury system, with theoretical guarantees that honest and informative answers are optimal. Through theoretical analysis and experiments the authors demonstrate three key findings: (1) the method effectively distinguishes model capabilities across diverse domains, (2) it exhibits an inverse scaling property where resistance to deception actually increases as the capability gap between jury and participants grows larger, enabling reliable evaluation of superhuman models, and (3) the method's resistance to deception improves with larger participant and jury populations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea of exploring mechanisms that exhibit game-theoretic incentive compatibility in model evaluation is pretty interesting and sufficiently new.\", \"The problem is important and well-motivated.\", \"Section 3 is well-written and from what I was able to check technically correct.\"], \"weaknesses\": \"I don't think the paper has any major technical issue, but it could be improved in terms of clarity for people not familiar with the mechanism design literature. Maybe add a background section in the appendix. The figures can also be improved by making the font larger and writing subfigure captions. Finally, I would also like to point out that being more technical when defining terms like \\\"being more truthful\\\", \\\"superhuman\\\", is extremely helpful. 
I was able to understand the paper regardless and I understand the use of these non-technical terms has increased in this literature, but I recommend being more precise in the final version of the work if possible.\", \"questions\": [\"Can you explain the practical implications of assumption 1? I think the paper lacks a discussion about the practical implications of all the assumptions in Sec 3.\", \"I had a hard time understanding Fig 3, can you expand on the details of each subfigure?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal [2/2]\", \"comment\": \"> Can you explain the practical implications of assumption 1? I think the paper lacks a discussion about the practical implications of all the assumptions in Sec 3.\\n\\nThat's a good point! We have included the discussion below as Remark 3, which we now refer to when presenting Assumption 1.\\n\\nLet's first examine the first part of Assumption 1, **(bounded) variability within prior** (VWP henceforth), which asks that PMI between different participants is bounded.\\n\\n*Pointwise mutual information (PMI) is a measure of how much the actual probability of a particular co-occurrence of events p(x, y) differs from what we would expect it to be on the basis of \\\\[sheer coincidence\\\\].* [2] *\\\\[It\\\\] draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in a corpus than we would have expected them to appear by chance.* [3]\\n\\nHere, PMI is instead taken over participants' answers - VWP is measuring the association between different participants, asking \\\"when Alice and Bob both answer D to the question, how much do we expect that to be because they converge upon the truth, compared to sheer coincidence?\\\"\\n\\nThe second part, **(bounded) variability across priors** (VAP henceforth), asks that when two agents with 
disagreeing priors assign differing prior probabilities to \\\"another participant (e.g. Alice) giving a certain answer (e.g. D)\\\", the ratio between their probabilities is bounded. \\n\\n**Taken together**, there are usually two ways in which Assumption 1 is satisfied in the real world. Both are sufficient conditions, so we **only need one to be true**.\\n\\n1. **Lower-bounded probabilities** (VWP+VAP). In a 4-option multiple-choice question, likely everyone always assigns no less than 0.5% probability to any option, just in case they are wrong. In this case, we can verify that VWP and VAP always hold.\\n2. **All participants have uncertainties about the answer** (VWP) and **participants all know that others have uncertainty too** (VAP). In this case, VWP is satisfied because when Alice and Bob both answer D, the \\\"sheer coincidence\\\" explanation can now no longer be ruled out, given that both Alice's and Bob's responses have some randomness in them. VAP is satisfied because, if both you and I agree that Alice has some \\\"stable\\\" uncertainty between options A/B/C/D, we won't disagree catastrophically (e.g. by >1000x) on how likely it is for Alice to answer D.\\n\\nNote that these aren't necessary conditions, but rather the two most plausible reasons for VWP/VAP being true in the real world; there are likely more of them.\\n\\n> it could be improved in terms of clarity for people not familiar with the mechanism design literature.\\n\\nGood point. We have thought about having an independent background section or expanding the related works section to introduce the mechanism design formalism. \\n\\nHowever, the field of algorithmic mechanism design has a very diverse range of formalisms (for example, the formalism of peer prediction is entirely different from that of auction design) [4], and so we decided to cover only the formalism of peer prediction to avoid distracting the reader. We did this introduction in Section 3. 
Please let us know if there are things in this introduction that we could do a better job explaining!\\n\\n---\\n\\nThank you again for this helpful review! We would love to hear what you think, and would love to know if you have had updates upon reading our response or the new suite of experiment results.\\n\\n## References\\n\\n[1] Wen, Jiaxin, et al. \\\"Language models learn to mislead humans via rlhf.\\\" *arXiv preprint arXiv:2409.12822 (2024).*\\n\\n[2] Bouma, Gerlof. \\\"Normalized (pointwise) mutual information in collocation extraction.\\\" *Proceedings of GSCL 30 (2009): 31-40.*\\n\\n[3] Jurafsky, Daniel. *\\\"Speech and language processing.\\\" (2000).*\\n\\n[4] Nisan, Noam, and Amir Ronen. \\\"Algorithmic mechanism design.\\\" *Proceedings of the thirty-first annual ACM symposium on Theory of computing (1999).*\"}", "{\"title\": \"Response to 11.24 Followup\", \"comment\": \"Thank you for the prompt response!\\n\\nSince ~all extra content is concentrated in **pages 14-17** (1 page of text + 3 pages of figures), please feel free to only read those new pages, without re-reading the rest of the paper.\\n\\nFeel free to follow up further! The discussion deadline has been extended to 2nd, though the PDF revision deadline remains 27th.\\n\\n> Is the jury model the same as the juror model?\\n\\nYes, that's a typo. Nice catch!\\n\\n> If I understand correctly, the same statement would be correct in a slightly modified form\\n\\nExactly, but the second half of our response was indeed aimed at such a modified critique. To quote our response:\\n\\n- *In general, while there can indeed be other BNE, honesty is (among) the highest-paying BNE, as shown in the \\\"achieves maximum ex-ante payoff among all equilibria for any agent\\\" part of Theorem 1.*\\n- *AI systems usually lie because lying gets them strictly more reward (as is the case in, for example, experimental works demonstrating deception [1,2,3]). 
The fact that honesty is among the highest-paying BNE (by Theorem 1) ensures that no other equilibrium can reward any agent more than honesty does, thus removing the incentive to lie, even though it does not ban deception in the strict sense.*\", \"here_we_are_saying\": \"regardless of how profitable encoding one's private information is (it need not be perfect reward!), there's always the option of \\\"not encoding it and sharing the private info as is\\\" that's at least as profitable.\\n\\nThis is true in general, not only for encoding-style deception, but also for all other non-honest equilibria. This should remove the incentive for deception.\\n\\n\\n> If in practice someone gave me an LLM to use as a juror/defendant without a way to verify that it follows the incentives of the game you defined, there is no guarantee that the method would actually work, right?\\n\\nGreat question. Theorems 1 and 2 don't provide a theoretical guarantee in that case, but we do have experimental evidence (Figure 5/6/12 where \\u2265half of the participants are deceptive) that peer prediction usually functions well as a measure of truthfulness **even outside the honest equilibrium** defined in Theorem 1/2. In other words, peer prediction can tolerate a decent amount of deception by multiple agents, while still maintaining discriminative power wrt deception.\\n\\nTo solve the root of this lack-of-incentive issue, one may also train/finetune models using peer prediction scores as reward, essentially performing unsupervised training for truthfulness. This would make the peer prediction score the explicit optimization objective of the LM policy, assuming no goal misgeneralization ([Langosco et al.](https://arxiv.org/abs/2105.14111)).\\n\\nIf this works, such an unsupervised truthfulness training process could use training data beyond the volume/difficulty limits of human annotation. It could be implemented in the model alignment pipeline. 
*Whether it indeed works is a hypothesis to be tested by training experiments, not a foregone conclusion.* \\n\\nGiven the existing volume/workload of this paper, and the volume/workload/downstream analysis required by the training experiments, we think they would merit a followup paper. (Exploring a new method outside the Overton window is a lot of work, yet dividing the work into paper-sized chunks might mean facing critiques of each chunk due to the information left in the other chunks, which is a bit of a difficult situation.)\\n\\n> replacing theorem 2 by a statement like this, as it seems like a substantially more realistic setting\\n\\nMakes sense! We've now explicitly included in Theorem 2's statement that \\\"The same is true when agents hold disagreeing 'prior over priors' $\\\\mathcal{D}_i$,\\\" and laid out the reduction (from the disagreeing $\\\\mathcal{D}_i$ case to the shared $\\\\mathcal{D}$ case) in the appendix.\\n\\nWe intentionally avoided a strictly formal definition of what \\\"hold disagreeing priors over priors\\\" means, and instead went for an intuitive treatment (which should be clear on its own!), since these are 3rd-order belief hierarchies, and a strict formalism could be confusing to read and distract from the core idea.\\n\\n> a method for general model evaluation, or as a method for detecting deception more specifically?\\n\\nThe most accurate description would be *peer prediction evaluates **truthfulness***, i.e. the extent to which the evaluated answer matches the truth. \\n\\nA distinction famously exists between truthfulness and honesty (cf. Diagram 2 in [Evans et al.](https://arxiv.org/pdf/2110.06674)): honesty requires saying things the model itself believes to be true (even if the belief could be mistaken), while truthfulness requires saying things that are actually true. Truthfulness asks the model to be not only *honest* but also *informed*.\\n\\nAs a result, we conducted both model honesty experiments (e.g. 
Figure 1,5) and model informativeness/correctness experiments (e.g. Figure 3,7) when testing the validity of peer prediction as an evaluation metric of truthfulness.\"}", "{\"title\": \"Thank You\", \"comment\": \"Thank you for your response and your evaluation efforts! We respect that you've made your decision, and decided to voice our disagreements nonetheless, if only for reference. Please feel free to ignore these remarks.\\n\\n- On the concern around repeating the witness answer: **(1)** Peer prediction empirically remains truthful when $\\\\geq$half answers are deceptive, but LLM-as-a-Judge could not remain truthful when $\\\\geq$half judgments are deceptive. **(2)** Conceptually, it's because the jury *uses its reasoning capabilities to exploit the **logical** cross-consistency that's distinctive of truthful answers* (cf. the \\\"mistaken mathematical calculation\\\" example in our discussion thread). This fundamentally sets the jury apart from a mere repeater, and it's exactly this difference that led to the distinctive empirical property in (1).\\n\\n- On broken files: We looked into the issue, and it seems like Anonymous Github somehow also scanned our compressed files for de-anonymizing strings, which broke the files. We have uploaded the files to [Github](https://github.com/AnonZQ5M3VY9/Peer-Pred-Data) instead. Please feel free to ignore them though.\\n\\nFinally, we'd like to express our appreciation for the highly helpful comments you've made throughout this discussion.\"}", "{\"title\": \"Rebuttal [1/3]\", \"comment\": \"Thank you, Reviewer 73k9, for your thoughtful review!\\n\\n## Summary of Changes\\n\\nIn our newly uploaded manuscript, most new content is concentrated in **pages 14-17**. Namely,\\n- Results from a range of new experiments (**Figure 5,6,7,8,9** newly added), including:\\n - Scaling experiments on fully heterogeneous participants (**Figure 5**). 
Results suggest that peer prediction indeed checks for truthfulness as opposed to mere similarity.\\n - Scaling experiments on populations containing half or more deceptive participants (**Figure 5,10,11,12**; the last 3 already existed in the submitted manuscript). The results also suggest that peer prediction checks for truthfulness as opposed to mere similarity.\\n - Scaling experiments with realistic, RLHF-trained deceptive model (**Figure 6**), based upon [MisleadLM](https://huggingface.co/jiaxin-wen/MisleadLM-QA) [1]. Results suggest that peer prediction continues to function in such settings akin to real-world deployment.\\n - Correlating peer prediction scores with ground truth accuracy, at a domain level (**Figure 7**). Results confirm that peer prediction rewards accurate answers.\\n - Visualizations of score distributions (**Figure 5,6**), for intuition-building purposes.\\n - Scaling properties of peer prediction with jury population size, for a wider range of aggregation exponent $\\\\alpha$ (**Figure 8**). $\\\\alpha=-1$ is confirmed to be a sweet spot.\\n - Scaling properties of *counterfactual* deception resistance (**Figure 9**). Results confirm that under peer prediction, honesty is a dominant strategy for participants.\\n- Updated the main text according to presentation suggestions from reviewers; please see the detailed reply below for further explanations.\\n- [Anonymized repository](https://anonymous.4open.science/r/Peer-Prediction-LLM-Eval-EB1E/) for reproducibility.\\n\\n## Response to Questions and Suggestions\\n\\n> Could you add an ablation using different models rather than instances of the same model as participants? 
That would be useful to rule out the following explanation for inverse scaling:\\n\\nWe agree that it's a possibility that would significantly reduce peer prediction's value.\\n\\n- Null Hypothesis: Peer prediction (PP henceforth) works only because honest answers are similar to each other (and likewise, dishonest answers are similar to each other), but the population contains a majority of honest participants, and PP basically evaluates an answer's similarity with the majority.\\n\\nWe noticed this possibility after submission, and have been conducting validation experiments on it. Specifically, there are two ways to test the hypothesis above:\\n\\n1. By having fully heterogeneous participants (using **all different models**), as you have suggested.\\n2. By making the population contain **an equal number of honest and deceptive participants**. (Note though this constraint is overly strong and might lead to false negatives, as it introduces an extra element of collusion) \\n\\nIn **Figure 5**, we show experiment results testing both (1) and (2). Performance trends (such as the inverse scaling property) are consistent with those in Figure 1, thus **rejecting the null hypothesis**.\\n\\nAdditionally, Row 2 of Figure 12 shows the detailed statistics when the population (4 participants) is exactly half-deceptive and half-honest, and the jury is SmolLM-360M (one of our smallest jurors). In this case, honest participants win out, refuting the hypothesis, which is good news. In fact, even when **75%** of the population are deceptive, honest participants still win out.\\n - Note that there is uncertainty as to the range of jurors that this result applies to. 
For instance, in some of the previous results (Figure 11), when participants and jurors are around the same size (which are the cases that peer prediction performs least competently in), deceptive participants may actually win when the population is half-and-half.\\n\\n> Some additional information on why LLM-as-a-judge was implemented the way it is (for example, no few-shot prompting) would also be helpful to better assess\\n\\nWe have added experiments where LLM-as-a-Judge is implemented with few-shot prompting (6-shot for Qwen, 3-shot for SmolLM due to its modest context window size), and results remain consistent; please see Figures 5 and 6. Examples are annotated by GPT-4o.\\n\\n> The improvements could be caused by the increasing capability of the peers rather than the increasing capability of the evaluated model. An ablation in which the peer models are fixed to the juror model's size would be useful here. [...] Could you add a plot with the judge size rather than the evaluated model's size varying on the x-axis?\\n\\nTo clarify, the scaling results in Figure 1 **already fix participant model sizes** within each subplot, and only vary the jury model size. We have updated the caption to make it clear!
Please let us know if there are other files you'd like to see, or if you have further questions!\\n\\n> Could you provide any intuition/explanation, on why you believe the inverse scaling results are more than an artifact of the specific evaluation method, and would generalize?\\n\\n- When the jury and the defendant have very similar capability levels, the jury can mostly predict the defendant's answer without much help from the informant, as it already has most of the objective-level knowledge that the defendant and the informant have.\\n- When the defendant is stronger than the jury, the jury can no longer single-handedly predict the defendant's answer, and relies more on the informant. The honest informant thus gets more reward by helping the jury make better predictions; the deceptive informant in turn gets lower rewards, as it now harms the jury's prediction quality to a larger extent.\\n\\n**Intuitively**, Alice teaching a 10-year-old to solve an AMC math question is a stronger indicator of Alice's expertise than if she taught a high school student to solve the same question, as the high schooler may be able to do 80% of the work even without Alice's help.\\n\\n**To compare the intuition with our theory**: in Theorem 1/2, we are using private information to model capability gaps, which is accepted practice in theoretical modeling of alignment [[1](https://arxiv.org/abs/2402.17747)]. Such modeling is most accurate when capability gaps are large - when Bob is immensely outsmarted by Alice, it places Alice's thoughts literally beyond Bob's reach, making those thoughts effectively private information. This may mean that in practice, Theorem 1/2 applies most squarely in cases where capability gaps are large; and these are exactly the cases where peer prediction performs the best. 
Since they are also the worst-case/highest-priority scenarios for safety and alignment, we think this would not harm the practical applicability of peer prediction.\\n\\n---\\n\\nIf we erase the jury's prior knowledge about the question at hand, making it entirely dependent on the informant, we predict that the discriminative power of peer prediction will be enhanced in cases where model capability gaps are smaller (jury and participants being closer in capabilities), seemingly reducing/reverting the inverse scaling trend. If this is true, peer prediction's performance will become uniformly better than the results we presented, which should be good news. However, we suspect that doing such erasing is largely equivalent to reducing model size (and thus reducing capabilities in general) - exactly the x-axis of the inverse scaling trend. Thus, even if it's true, it wouldn't really challenge the inverse scaling property.\\n\\nFinally, we agree that a never-ending inverse scaling property is too good to be true. At some point, e.g. when the jury becomes weak enough that it can't even reason properly, we expect the trend to be reversed, where a larger capability gap means lower discriminative power. However, we have used the smallest modern LLM we can find (SmolLM-135M) as jury, and at least until SmolLM-135M, the inverse scaling trend continues.\\n\\nWe were also initially surprised to discover the inverse scaling property, and have experimented with changing the mechanism (e.g. disallowing the jury to see past informant answers during in-context learning, to prevent the jury from identifying and adapting to deceptive informants), in order to rule out setting specificity. However, results stay broadly consistent in those experiments, which convinced us of the explanation given above. 
We don't currently have those experiment results in an organized/presentable manner, but could organize/redo them if needed.\"}", "{\"metareview\": \"## Summary:\\nThis paper addresses the challenge of evaluating language models without reliable supervision, especially for complex tasks where trusted supervision is lacking. It introduces a peer prediction method inspired by game-theoretic incentive compatibility from mechanism design literature. This method distinguishes between honest and informative answers and deceptive ones without relying on ground truth labels. The paper demonstrates the effectiveness and resistance to deception of peer prediction through theoretical guarantees and empirical validation on models up to 405B parameters. Compared to the LLM-as-a-Judge approach requiring strong and trusted judges, peer prediction exhibits an inverse scaling property where resistance to deception strengthens as the gap between the jury and participants widens. This enables reliable evaluation of powerful models without trusted supervision, showcasing the potential for game-theoretic resistance to model deception in alignment and evaluations.\\n\\n## Strengths:\\n1. The paper introduces a novel method of evaluating LLMs using other LLMs without requiring reliable supervision or ground truths. Compared to LLM-as-a-judge approaches, it highlights the resistance to deception and the inverse scaling property (weaker/smaller jury + stronger/larger participants improves the evaluation). \\n1. The paper incorporates \\\"game-theoretic incentive compatibility\\\" principles, which are interesting to the community and well-motivated. They also provided mathematical proofs for the theorems presented. Theorem 2 can be of independent interest.\\n1. The experiments are comprehensive, covering various tasks and models. \\n1. The paper is well written and clear in presentation. \\n\\n## Weaknesses:\\n1. 
The observed inverse scaling property might be a trivial consequence of the larger honest witnesses giving better answers, with the smaller jury model simply following those answers without much extra reasoning or judgment. Moreover, it is not intuitive to get improved evaluations from worse jury models without a convincing analysis. And it is not clear when the property no longer holds if we keep decreasing the jury size. Although the intuitions presented in the discussion are helpful, they do not provide sufficient information to exclude the possibility. \\n1. The paper overclaims the importance of their contribution to evaluating superhuman models since the experiments are conducted on non-superhuman open-source models. The authors revised the statements accordingly later by removing \\\"superhuman models\\\" but this also weakens the motivation of \\\"no trusted supervision for superhuman models\\\". \\n1. The motivation from \\\"the lack of trusted supervision\\\" might not hold for many practical tasks and applications where the ground truths or rewards are not very expensive to attain. This may limit the scope of this paper. \\n1. There is a gap between the theory and the empirical results: the theory assumes that the majority of participants know the truths, while the empirical results show a better robustness to >50% deceptive participants. This also may imply that the majority vote might be a much simpler but hard baseline to beat (even when >50% are deceptive peers as long as their answers are diverse but honest peers' answers are consistent). \\n1. The scalability and computational cost of the proposed method can be much worse than the LLM/human judge approaches as it requires inferences on multiple models. \\n\\n\\n## Decision:\\nThe authors provided detailed clarifications and additional experimental results in the rebuttal, as requested by the reviewers. 
Two out of the three reviewers responded to the rebuttal and actively participated in multiple rounds of in-depth discussions with the authors. The remaining reviewer who has not responded voted for borderline acceptance with relatively low confidence. Although some important concerns have been addressed by the rebuttal and discussion, the two reviewers in the discussion still decided to keep their original ratings since several significant issues have not been comprehensively investigated as summarized in the above weaknesses section. The meta-reviewer carefully read all the discussions and the paper as well. Despite the importance of the studied peer evaluation, the key assumptions and problem setups in this paper are not fully convincing given the current theoretical and empirical results. The authors are encouraged to revise the draft according to the discussion before submitting it to the next conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed clarifications and additional experimental results in the rebuttal, as requested by the reviewers. Two out of the three reviewers responded to the rebuttal and actively participated in multiple rounds of in-depth discussions with the authors. The remaining reviewer who has not responded voted for borderline acceptance with relatively low confidence. Although some important concerns have been addressed by the rebuttal and discussion, the two reviewers in the discussion still decided to keep their original ratings since several significant issues have not been comprehensively investigated as summarized in the above weaknesses section. The meta-reviewer carefully read all the discussions and the paper as well. Despite the importance of the studied peer evaluation, the key assumptions and problem setups in this paper are not fully convincing given the current theoretical and empirical results. 
The authors are encouraged to revise the draft according to the discussion before submitting it to the next conference.\"}", "{\"title\": \"Rebuttal [2/2]\", \"comment\": \"> According to weakness 3, are you available to provide more experiments considering different combinations of the participant's model?\\n\\nCertainly. In addition to the model combinations we already presented (`Llama3.1-8B/70B/405B` + `Mistral-7B-v0.3` for informativeness, [`Llama3.1-8B` or `Gemma-2-2B/27B`]+ [`Qwen2.5-0.5B/1.5B/3B/7B` or `SmolLM-135M/360M`] for honesty), we have added experiments with fully heterogeneous participants in **Figure 5** (`Llama3.1-8B`, `Gemma-2-9B`, `Mistral-7B-v0.3`) and **Figure 6** (`Llama2-7B`, `Gemma-2-9B`, `Mistral-7B-v0.3`), and also populations with varying honest-deceptive compositions ($25\\\\\\\\%$, $50\\\\\\\\%$, $75\\\\\\\\%$) in **Figure 12**. These experiments show that peer prediction is overall effective in incentivizing honesty across a wide range of participant combinations.\\n\\n> This method will be really resource-consuming when considering superhuman models. \\n\\nIn fact, for models that push the frontier of AI capabilities, we believe that peer prediction induces **lower** cost than alternative methods for evaluation. \\n\\nCurrently, the only widely accepted \\\"gold standard\\\" for evaluating AI models is human judgment [8,9], which is expensive and time-consuming (not to mention its susceptibility to deception [3,4,5,6]). \\n\\nAs for AI-based evaluation methods such as LLM-as-a-Judge, they are typically only used for *evaluating weaker models using stronger models* (e.g. using GPT-4o to evaluate outputs of a finetuned Llama 8B), not the other way around (which is widely considered unreliable). 
As a result, evaluation of newly released frontier models (whose capabilities surpass existing models) is still done using human judgment or human-written datasets [9], which is expensive and time-consuming.\\n\\nPeer prediction, on the other hand, is a scalable and deception-resistant evaluation method that can be used to evaluate frontier models, while being much cheaper and faster than human evaluation. \\n\\n> \\u201cdistinguish better models from worse ones\\u201d is not a good evaluation metric to me as it is a relative result instead of an absolute one, which means it will have more limitations. For example, you need to find appropriate models for comparison when testing\\n\\nWe believe that a relativist evaluation metric is no less useful than an absolutist one, and indeed, the former can be easily converted to the latter by using, for example, the Elo score system. \\n\\nTake **LMSYS Chatbot Arena** [9] for example, where models are paired up and compared against each other in a tournament-style competition, using human evaluation. While each round of comparison is relativist, Chatbot Arena's Elo score system converts these relative comparisons into an absolute ranking of models, where each model is assigned a numerical score that reflects its overall performance. Today, Chatbot Arena is the *de facto* gold standard for benchmarking chatbots.\\n\\nWe believe that a similar system can be implemented for peer prediction, where models are paired up and compared against each other in a tournament-style competition, and the results are converted into an absolute ranking of models using the Elo score system. 
This approach would share the same benefits as Chatbot Arena, but at a tiny fraction of the cost and time required, since peer prediction, unlike Chatbot Arena, requires no human evaluation.\n\nWe consider the construction of such evaluation infrastructure the subject of future work, while our current paper aims to lay down the methodological foundations.\n\n---\n\nThank you again for the highly insightful feedback! Please let us know what you think, and we encourage you to reevaluate our work in light of the new experiments and explanations we have provided.\n\n## References\n\n[1] Bengio, Yoshua, et al. \\\"Managing extreme AI risks amid rapid progress.\\\" *Science 384.6698 (2024): 842-845.*\n\n[2] Shevlane, Toby, et al. \\\"Model evaluation for extreme risks.\\\" *arXiv preprint arXiv:2305.15324 (2023).*\n\n[3] Park, Peter S., et al. \\\"AI deception: A survey of examples, risks, and potential solutions.\\\" *Patterns 5.5 (2024).*\n\n[4] Wen, Jiaxin, et al. \\\"Language models learn to mislead humans via RLHF.\\\" *arXiv preprint arXiv:2409.12822 (2024).*\n\n[5] Lang, Leon, et al. \\\"When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback.\\\" *arXiv preprint arXiv:2402.17747 (2024).*\n\n[6] Williams, Marcus, et al. \\\"Targeted Manipulation and Deception Emerge when Optimizing LLMs for User Feedback.\\\" *arXiv preprint arXiv:2411.02306 (2024).*\n\n[7] Kong, Yuqing, and Grant Schoenebeck. \\\"An information theoretic framework for designing information elicitation mechanisms that reward truth-telling.\\\" *ACM Transactions on Economics and Computation (TEAC) 7.1 (2019): 1-33.*\n\n[8] Chen, Guiming Hardy, et al. \\\"Humans or LLMs as the judge? A study on judgement biases.\\\" *EMNLP 2024.*\n\n[9] Chiang, Wei-Lin, et al. \\\"Chatbot arena: An open platform for evaluating LLMs by human preference.\\\" *arXiv preprint arXiv:2403.04132 (2024).*\"}" ] }
EVuANndPlX
GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning
[ "Costas Mavromatis", "George Karypis" ]
Retrieval-augmented generation (RAG) in Knowledge Graph Question Answering (KGQA) enriches the context of Large Language Models (LLMs) with retrieved KG information based on the question. However, KGs contain complex graph information and existing KG retrieval methods are challenged when questions require multi-hop information. To improve RAG in complex KGQA, we introduce the GNN-RAG framework, which leverages Graph Neural Networks (GNNs) for effective graph reasoning and retrieval. GNN-RAG consists of a graph neural phase, where the GNN retriever learns to identify useful graph information for KGQA, e.g., when tackling complex questions. At inference time, the GNN scores answer candidates for the given question and the shortest paths in the KG that connect question entities and answer candidates are retrieved to represent KG reasoning paths. The paths are verbalized and given as context to the downstream LLM for ultimate KGQA; GNN-RAG can be seamlessly integrated with different LLMs for RAG. Experimental results show that GNN-RAG achieves state-of-the-art performance in two widely used KGQA benchmarks (WebQSP and CWQ), outperforming or matching GPT-4 performance with a 7B tuned LLM. In addition, GNN-RAG excels on multi-hop and multi-entity questions, outperforming competing approaches by 8.9--15.5\% points at answer F1. Furthermore, we show the effectiveness of GNN-RAG in retrieval augmentation, which further boosts KGQA performance.
[ "Knowledge Graph", "Large Language Models", "Retrieval-Augmented Generation" ]
Reject
https://openreview.net/pdf?id=EVuANndPlX
https://openreview.net/forum?id=EVuANndPlX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uw4IBKEhyE", "spr77A8Zay", "kNUKGRhq7t", "iYsm88nyed", "Nn2oDejhR1", "GairMxw9W3", "8rEs4x2YPr" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1737524061686, 1729586717257, 1729524838830, 1730275827905, 1730456111150, 1730291436696, 1735002644881 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10558/Reviewer_7Tku" ], [ "ICLR.cc/2025/Conference/Submission10558/Reviewer_TceY" ], [ "ICLR.cc/2025/Conference/Submission10558/Reviewer_EGFX" ], [ "ICLR.cc/2025/Conference/Submission10558/Reviewer_sWgu" ], [ "ICLR.cc/2025/Conference/Submission10558/Reviewer_3FK4" ], [ "ICLR.cc/2025/Conference/Submission10558/Area_Chair_1sx4" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces GNN-RAG, a KGQA (Knowledge Graph Question Answering) algorithm that combines GNN (Graph Neural Networks) with LLM reasoning. This method uses a GNN-based KGQA algorithm to retrieve reasoning evidence from a KG, allowing the LLM to generate the final answer. Experiments show that this approach effectively improves the performance of GNN-based KGQA methods and outperforms other LLM-based KGQA algorithms in terms of accuracy and efficiency.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. GNN-RAG uses a GNN-based KGQA algorithm to obtain candidate answers to the question, then samples the shortest path between the topic entity and candidate entities as reasoning evidence, allowing the LLM to generate the final answer. Experimental results demonstrate that the GNN-based KGQA algorithm can effectively retrieve reasoning evidence from the KG, and the fine-tuned LLM can accurately infer results from the evidence.\\n\\n2. 
Compared to existing LLM-based KGQA methods, GNN-RAG shows better efficiency and accuracy on two KGQA datasets.\n\n3. The paper provides thorough case studies and theoretical analysis on the strengths and weaknesses of the GNN approach.\n\n4. The structure of the paper is clear and easy to follow.\", \"weaknesses\": \"1. Using existing KGQA methods to retrieve information from a KG and then allowing an LLM to perform reasoning based on the retrieved information is a rather intuitive idea. In some LLM-based KG completion works, similar approaches have been used, where traditional KG completion models retrieve relevant triples and candidate entities from the KG, and then LLMs use their semantic understanding to perform reasoning [1][2]. This work seems to have simply applied a similar idea to the KGQA task. The retrieval stage follows existing GNN-based KGQA methods without providing sufficient theoretical analysis of the GNN's advantages in multi-hop information retrieval, and the utility of GNN-based KGQA methods is also intuitive since GNN models currently perform the best among existing IR-based KGQA methods. Moreover, the reasoning stage follows the design of RoG [3], so the innovation of this method is rather limited.\n\n2. This approach requires training two models, namely training a KBQA model and fine-tuning an LLM. While the approach has an efficiency advantage during inference, it demands a large amount of labeled training data, which is difficult to obtain in real-world scenarios. The paper does not sufficiently discuss the costs of model training, such as how much data is used to fine-tune the LLM, the time required to train the KBQA model, or how changes in training data affect the method's performance.\n\n3. As shown in Table 5, this method is constrained by the GNN-based KGQA model, and when the performance of the GNN-based KGQA model is poor, the overall model's performance declines significantly. 
This point is also mentioned in the limitations section of the paper.\n\n[1] Wei, Yanbin, et al. \\\"KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion.\\\" Findings of the Association for Computational Linguistics: EMNLP 2023. 2023.\n[2] Liu, Yang, et al. \\\"Finetuning Generative Large Language Models with Discrimination Instructions for Knowledge Graph Completion.\\\" arXiv preprint arXiv:2407.16127 (2024).\n[3] Luo, Linhao, et al. \\\"Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning.\\\" The Twelfth International Conference on Learning Representations.\", \"questions\": \"1. What is the cost of model training? For example, how much data was used to fine-tune the large language model, and how long did it take to train the KBQA model?\n\n2. How does changing the amount of training data (either fine-tuning data or KBQA model training data) affect the performance of the method?\n\n3. In Table 6, are the results of Alpaca-7b based on the Alpaca-7b model fine-tuned for this task, or are they based on the original Alpaca-7b model? Would fine-tuning a more powerful LLM yield better results?\n\n4. If only RA is used for knowledge retrieval, how would the performance change?\n\n5. For the KGQA task, constructing QA training data versus constructing QA data with SPARQL queries is not significantly different since the ground truth answers need to be validated with SPARQL queries. If SOTA SP-based KGQA methods are used for obtaining candidate entities, would their performance be superior to that of the GNN-based method, especially in cases of 1-hop queries?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Retrieval-augmented generation (RAG) framework named GNN-RAG for the knowledge graph question answering (KGQA) task. 
GNN-RAG first employs a GNN to find plausible candidate answers for the given question, and extract the shortest path between the topic entity and candidate answers as reasoning paths. Then, the framework finetunes and employs an LLM to answer the question based on these candidate entities and related reasoning paths. GNN-RAG achieves the SOTA performance on the WebQSP and the CWQ datasets. The paper also makes direct and detailed comparisons between the proposed method and its competitive baseline RoG.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) Key Original Contribution: This paper proposes to utilize GNN to retrieve and select proper candidate answers for the given question, which effectively alleviates the limitation of LLM-based KGQA methods. Note: The Key contribution belongs to the \\u201cRetrieval\\u201d part.\\n\\n2) Compared to LLM-based baseline methods, GNN-RAG is simple and effective. It achieves state-of-the-art performance on WebQSP with Llama2-7B model. The performance on CWQ is slightly lower than ToG with GPT4 as the backbone LLM.\\n\\n3) The authors of this paper conducted detailed comparisons between RoG and the proposed GNN-RAG, and examined the effectiveness of GNN-RAG, RoG, and ToG on different kinds of LLMs.\", \"weaknesses\": \"1) Although the proposed method demonstrates satisfactory performance, it is \\u201crelatively\\u201d incremental compared to ToG and RoG. The retrieval of reasoning paths with GNNs is not a brand-new idea in the QA domain [1]. Apart from the method of path retrieval, less essential differences can be found between RoG and GNN-RAG. Nevertheless, this reviewer does not believe that the paper should be rejected based solely on this reason.\\n\\n2) This paper lacks a detailed explanation of the design of the GNN. Specifically, it could be beneficial to explicitly model the classification step with equation(s). 
This reviewer does not know how the authors perform the classification with the entity embeddings outputted by the L-th layer. He can only make an educated guess that the classification operation is conducted by an FC layer followed by a softmax / sigmoid layer. \n\n3) It could be beneficial to take UniKGQA (or ReaRev) and/or ToG into comparison in Table 3 and 4 since Figure 3 mentioned the landscape of three types of existing methods.\n\n4) The paper did not discuss the time cost for LLM fine-tuning and inference. This reviewer is surprised to see that the proposed method, with a 7B model, outperforms baselines with a 70B model or GPT. However, the efficiency of the proposed method cannot be judged solely by the number of LLM calls required during the inference stage. GPU hours for LLM fine-tuning is also an important metric.\n\n[1] Relation-aware Bidirectional Path Reasoning for Commonsense Question Answering (CoNLL 2021)\", \"questions\": \"1) How does GNN-RAG initialize the entity / relation embeddings? It would be beneficial to explicitly state whether the framework uses randomly initialized or pre-trained embeddings, or whether these embeddings are optimized with GNN parameters.\n*(This is crucially important especially for the IC\\\"LR\\\" conference.)*\n\n2) How to emphasize k in equation 3? Does that mean, given a fixed LM, we can have k different embeddings for a specific question? Or, is the question embedding pooled based on the final layer output of each of the tokens? Please *explicitly* state the structure of the mentioned \\\"attention-based pooling neural network\\\" with equations.\n\nThis reviewer sincerely requests the authors to enrich the implementation details in section 4.1. The current submission is insufficient for readers to reproduce the experimental results without checking the source code. Spaces are available to add these details since the paper does not fill 10 pages. 
\\n\\n3) Would you mind adding a small section and/or a table to discuss the time cost of LLM fine-tuning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper present GNN-RGA, a novel graph neural method for enhancing RAG in KGQA with GNNs. GNN-RAG achieves state-of-the-art performance in two widely used KGQA benchmarks (WebQSP and CWQ).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This is a solid study with reasonable technical contributions. The large number of baseline models used in the experiments is impressive.\\n2. The paper is generally understandable and clearly explains the technical parts to a certain extent.\\n3. The figures and charts in the manuscript are exceptionally clear and well-presented.\", \"weaknesses\": \"1. This paper does not provide sufficient details on the RAG WITH LLM.\\n2. The paper misses many related studies such as [1][2][3][4], which could provide a broader context and highlight the novelty and contribution of GNN-RGA more effectively.\\n[1] ARL: An adaptive reinforcement learning framework for complex question answering over knowledge base. Inf. Process. Manag. 59(3): 102933 (2022)\\n[2] Query Path Generation via Bidirectional Reasoning for Multihop Question Answering From Knowledge Bases. IEEE Trans. Cogn. Dev. Syst. 15(3): 1183-1195 (2023)\\n[3] Question-Directed Reasoning With Relation-Aware Graph Attention Network for Complex Question Answering Over Knowledge Graph. IEEE ACM Trans. Audio Speech Lang. Process. 32: 1915-1927 (2024)\", \"questions\": \"1. This paper does not provide sufficient details on the RAG WITH LLM.\\n2. 
The paper misses many related studies such as [1][2][3][4], which could provide a broader context and highlight the novelty and contribution of GNN-RAG more effectively.\n[1] ARL: An adaptive reinforcement learning framework for complex question answering over knowledge base. Inf. Process. Manag. 59(3): 102933 (2022)\n[2] Query Path Generation via Bidirectional Reasoning for Multihop Question Answering From Knowledge Bases. IEEE Trans. Cogn. Dev. Syst. 15(3): 1183-1195 (2023)\n[3] Question-Directed Reasoning With Relation-Aware Graph Attention Network for Complex Question Answering Over Knowledge Graph. IEEE ACM Trans. Audio Speech Lang. Process. 32: 1915-1927 (2024)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents GNN-RAG, a framework aimed at improving retrieval-augmented generation for KGQA. GNN-RAG leverages Graph Neural Networks for effective graph-based reasoning and retrieval, supplying candidate reasoning paths as context to the language model. Experimental results show that GNN-RAG, with a tuned 7B LLM, outperforms GPT-4 in performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly presented and easy to follow.\n2. The GNN-RAG approach integrates seamlessly with various LLMs.\n3. The proposed method shows notable improvements in the KBQA task.\", \"weaknesses\": \"1. The contribution of the paper feels limited; the approach mainly leverages LLMs to retrieve the correct answers from REAREV, without significant novelty in methodology.\n2. Experiments are conducted on only three datasets, with numerous competitor results missing, which limits comprehensive evaluation.\", \"questions\": \"1. Why are many results missing in Table 2? 
In addition, expanding the experiments to include more datasets would help demonstrate the generalizability of the proposed method.\n2. The model involves several hyperparameters (e.g., entity selection threshold and m), yet sensitivity analysis is absent. Including such analysis would improve the robustness of the evaluation.\n3. According to the original REAREV method, H@1 on the MetaQA-3 dataset achieves 98.9, whereas performance with the LLM (i.e., GNN+RAG) in Table 3 decreases to 98.6. Could the authors clarify in which cases the integration with the LLM might lead to a performance drop?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a GNN-RAG framework that combines GNN and LLM for the KGQA task. GNN-RAG first leverages a SOTA GNN model to retrieve answer candidates for a given question, then the shortest paths that connect question entities and answer candidates are extracted to represent KG reasoning paths for LLM\u2019s final output. Experiments demonstrate that GNN-RAG with fine-tuned LLaMA2-7B can achieve state-of-the-art performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes to leverage GNN as a retriever to improve RAG in complex KGQA.\n2. GNN-RAG improves LLMs without incurring additional LLM calls, and achieves competitive performance.\n3. The authors provide code for their implementation.\", \"weaknesses\": \"1. The overall framework lacks novelty and is just like a simple concatenation of existing methods.\n\n2. The role of GNN as a retriever is only to identify candidate entities, rather than to \\\"learn useful graph information\\\" as stated by the author.\n\n3. There are a large number of blank spaces in the experimental result table.\", \"questions\": \"1. 
In GNN-RAG framework, the role of GNN is merely to obtain high-scoring candidate entities. Can the GNN be replaced by any other model, such as a simple but effective embedding-based model? What is the necessity of having a GNN in this framework?\\n\\n2. When there is more than one shortest path between the candidate entity and the question entity, how should the retrieval result be determined? How can you ensure that the shortest path accurately aligns with the semantic information of the question?\\n\\n3. Is the setting of 'RA' in the method equivalent to performing an ensemble between GNN-RAG and RoG?\\n\\n4. Many LLM-based methods typically report their results using the Hits@1 metric, but you place their results under the Hit metric. By doing so, larger performance improvements are observed. Is there sufficient evidence to suggest that this comparison is appropriate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Summary: The paper proposes GNN-RAG, a framework combining Graph Neural Networks (GNNs) and Retrieval-Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). The framework uses GNNs for retrieval of graph-based reasoning paths, which are then provided as context to downstream Large Language Models (LLMs). 
Experimental studies are conducted on two KGQA benchmarks (WebQSP and CWQ), outperforming or matching GPT-4 performance using a smaller LLM (7B parameters) and retrieval.\", \"strengths\": [\"Clear presentation with thorough explanations and illustrative examples\", \"Detailed ablation studies and analysis of different components\"], \"weakness\": [\"Lukewarm response from all but one reviewer and the positive reviewer didn't champion the paper\", \"Limited Novelty: Several reviewers noted that the approach primarily combines existing GNN and RAG techniques, raising concerns about methodological originality.\", \"Role of GNNs: The GNN primarily retrieves high-scoring candidate entities rather than performing more complex graph reasoning, which some reviewers found underwhelming.\", \"Missing important baseline: Apart from the works pointed out by reviewer, there are several missing important works, e.g. Das et al \\\"Case-Based Reasoning for Natural Language Queries over Knowledge Bases\\\", 2021. He et al. \\\"Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals\\\", 2022, and inter alia.\", \"Incomplete empirical comparisons and stronger than substantiated claims: In light of other missing works, it is not clear if GNN-RAG achieves SoTA results\"], \"decision\": \"Given the lack of enthusiasm from the reviewers, incremental novelty, and missing prior works, unfortunately, the paper can't be accepted in its current form and addressing all the concerns would warrant another round of reviewing.\", \"additional_comments_on_reviewer_discussion\": \"We thank the authors for engaging during the discussion phase towards improving the paper. Below are some of the highlights:\\n\\n1. Novelty concerns: Multiple reviewers questioned the technical innovation. The authors argued that showing GNNs remain effective complementary components to LLMs is an important contribution, though some reviewers remained unconvinced.\\n2. 
Missing implementation details: Reviewers requested more details about GNN architecture, embeddings, etc. The authors provided comprehensive additional technical details in their response and updated appendix.\\n3. Baseline comparisons: Reviewers noted some missing baseline comparisons. The authors clarified that some baselines (e.g., ToG) had reproducibility issues, but agreed to add additional comparisons where possible.\\n4. Training costs/efficiency: Reviewers requested more discussion of computational requirements. The authors added details about training times and resource usage.\\n\\nThe authors were responsive and provided detailed clarifications and additional results. However, some of the main concerns, e.g. novelty, weren't fully resolved.\"}" ] }
EVg9lwHFJs
Fine-Grained Emotion Recognition with In-Context Learning: A Prototype Theory Approach
[ "Zhaochun Ren", "Zhou Yang", "Chenglong Ye", "Wang Yufeng", "Haizhou Sun", "Chao Chen", "Xiaofei Zhu", "Wu Yunbing", "Xiangwen Liao" ]
In-context learning (ICL) achieves remarkable performance in various domains such as knowledge acquisition, commonsense reasoning, and semantic understanding. However, its effectiveness deteriorates significantly in emotion detection tasks, particularly in fine-grained emotion recognition. The reasons behind this decline still remain unclear. In this paper, we explore the underlying reasons of ICL's suboptimal performance through the lens of prototype theory. Our investigation reveals that ICL aligns with the principles of prototype theory when applied to fine-grained emotion recognition tasks. According to prototype theory, effective emotion recognition requires: Referencing well-represented emotional prototypes that are similar to the query emotions, and making predictions based on the closest emotional similarity. Building on this insight, ICL has three main shortcomings: (1) It uses oversimplified single-emotion labels for prototypes, leading to inaccurate emotion representation. (2) It references semantically similar but emotionally distant prototypes. (3) It considers all emotion categories as candidates, leading to interference from irrelevant emotions and inaccurate predictions. To address these shortcomings, we propose an Emotion Context Learning method (E-ICL) for fine-grained emotion recognition. E-ICL first employs a dynamic soft-label strategy to create multi-dimensional emotional labels for accurate prototype representation. It then selects emotionally similar prototypes as references for emotion prediction. Finally, it uses an emotion exclusion strategy to eliminate interference from dissimilar emotions by selecting similar emotions as candidates, resulting in more robust and accurate predictions. Note that our approach is implemented with the aid of a plug-and-play emotion auxiliary model, requiring no additional training. 
Extensive experiments conducted on fine-grained emotion datasets—EDOS, Empathetic-Dialogues, EmpatheticIntent, and GoEmotions—demonstrate that E-ICL significantly outperforms existing methods in emotion prediction performance. Moreover, even when the emotion auxiliary model accounts for less than 10\% of the LLMs' capacity, E-ICL consistently boosts LLM performance by over 4\% across multiple datasets.
[ "fine-grained emotion recognition", "in-context learning", "ICL", "large language model", "LLMs" ]
https://openreview.net/pdf?id=EVg9lwHFJs
https://openreview.net/forum?id=EVg9lwHFJs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yFb0t2aIR9", "X3elXMNpov", "VlmC3fgfAd", "QwOIpRZId9" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730783992676, 1730653080310, 1737103458092, 1730552330588 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission98/Reviewer_uvLi" ], [ "ICLR.cc/2025/Conference/Submission98/Reviewer_T93f" ], [ "ICLR.cc/2025/Conference/Submission98/Authors" ], [ "ICLR.cc/2025/Conference/Submission98/Reviewer_LwuJ" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces Emotion In Context Learning (E-ICL), a method aimed at improving the performance of few-shot LLMs on the task of fine-grained emotion detection. E-ICL proposes various improvements to traditional in-context learning methods: first, it leverages the interpretable nature of emotion by recognizing that an example can express various emotions to various degrees. To this end, it proposes a soft-labeling strategy to assign multiple emotion categories to few-shot examples. Second, the approach leverages embeddings of a specialized emotion detection model to pick up \u201cemotionally\u201d similar demonstrations instead of relying only on semantic similarity. Finally, E-ICL also incorporates an exclusion mechanism to focus only on demonstrations that are most likely to express the same emotion as the test example. The approach is tested on four fine-grained emotion detection datasets using two LLMs, where it attains considerable performance improvements.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The idea behind the paper is interesting. 
Using soft labels for prototypes seems promising and very fitting to emotion detection, an inherently ambiguous task.\", \"E-ICL considerably outperforms the traditional ICL.\", \"The paper carries out a lot of analyses to measure the effect of method hyperparameters: k1, k2, k3, alpha, as well as ablation study on various components of the method.\"], \"weaknesses\": [\"While the paper has some merit and the method is interesting, I believe there are significant flaws in both presentation and significance.\", \"The presentation of the method (i.e., Section 4) needs significant improvements. Notation is inconsistent and unintuitive, with various errors. I found it very hard to understand this section. For example, in equation 5 we subtract a vector from a scalar; there\\u2019s a sum over k1, but j is not properly defined; these probability distributions seem not to be probability distributions, i.e., they are not normalized. In L246 m_{i} \\\\in n_{d} but in the very next sentence n_{d} is a scalar. There\\u2019s a notation of both p^{s}_{m_{i}} and p^{s}_{i}. Which one is it? In L205, V is presented as an emotion vector and it\\u2019s mentioned that V \\\\in R^{768}. What is this 768? The reader may not be familiar with all embedding sizes for different types of models.\", \"The experimental setup is limited: only Claude-haiku and ChatGPT-turbo are not enough baselines to offer confidence that the method is widely applicable. Additionally, there are no baselines shown besides the RoBERTa.\", \"Most importantly, it is unclear to me why the RoBERTa model trained in-domain is not shown. In the method proposed here, the model has access to the training set and utilizes the GT (Eq 5). Therefore it\\u2019s critical to show this comparison where you train RoBERTa on the training data. Unfortunately, as shown in the papers introducing the datasets considered here, it seems that traditional language models such as BERT outperform E-ICL. 
Additionally, E-ICL has significant inference-time costs that are not discussed here.\", \"The method is cumbersome: parameters such as k1, k2, k3, \\alpha need to be tuned.\"], \"missing_citations\": \"Suresh et al., 2021 Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification\", \"questions\": [\"Have you considered a variable number of demonstrations based on the context window (i.e., maxing out the input context length)? This could maybe remove the need for k2.\", \"Have you done a comparison of the inference costs of ICL vs. E-ICL vs. RoBERTa?\", \"The prompt designs are not shown. What do the prompts look like?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper named \u201cFine-Grained Emotion Recognition with In-Context Learning: A Prototype Theory Approach\u201d investigates the limitations of In-Context Learning (ICL) in fine-grained emotion recognition tasks. Using prototype theory, the authors identify key challenges: ICL\u2019s reliance on single-emotion labels, selection of semantically rather than emotionally similar prototypes, and consideration of all emotion categories, which results in poor performance. To address these issues, they introduce Emotional In-Context Learning (E-ICL), a method that enhances prototype construction, selection, and prediction by incorporating a soft-labeling approach and exclusionary emotion prediction.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. E-ICL effectively addresses ICL\u2019s emotional recognition limitations by incorporating dynamic soft labels and emotionally similar prototype selection.\n\n2. The approach shows a significant improvement in emotion recognition without additional model training, making it efficient and accessible.\", \"weaknesses\": \"1. There are some typos in the paper.\n2. 
The paper primarily compares against ICL and zero-shot methods, which might not fully showcase E-ICL\\u2019s comparative strengths. There should be some methods with improved ICL prompts for comparison. It is not fair to directly compare zero-shot prompts with well-designed prompts.\\n3. The authors should include more LLMs to verify the effectiveness of the proposed E-ICL methods.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"There are three main disadvantages of In-context learning (ICL), including using a single emotion label leading to inaccurate emotion representation, referencing semantically similar but emotionally distant prototypes, and treating all emotion categories as candidates leading to interference and inaccurate predictions. This paper proposes an emotional in-context learning method (E-ICL) for fine-grained emotion recognition. Extensive experiments on fine-grained emotion datasets (EDOS, Empathetic-Dialogues, EmpatheticIntent, and GoEmotions) show that E-ICL significantly outperforms existing methods in emotion prediction performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"N/A\", \"weaknesses\": \"1. In the description of Figure 1 (lines 88-90), the author does not introduce in detail which method is used to select the emotional prototype closest to the query. In addition, in lines 81-82, the author did not specifically explain why the existing methods would query irrelevant prototypes. It is recommended to add additional explanations to better reflect the limitations of the existing methods and the motivation of this article.\\n2. Some of the illustrations in Figure 1 are slightly rough. 
For example, the fonts of subtitles (a) and (b) are too large and inconsistent with other fonts.\\n3. In the selection of baseline models, although some common models and methods are selected as comparisons, there may be some other advanced emotional recognition methods that have not been compared. For example, some of the latest large language models and some fine-tuning methods are mentioned in the following literature.\\n[1] Liu Z, Yang K, Xie Q, et al. Emollms: A series of emotional large language models and annotation tools for comprehensive affective analysis[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 5487-5496.\\n4. The authors need to give the recognition accuracy and F1 value of each method for each emotion category on the dataset to further illustrate the performance of the method. I need to clearly understand in which categories the performance of the proposed method is improved.\\n5. The paper simply explains the connection between ICL and prototype theory (lines 90-92) and proposes an improved method based on this. The theoretical support for its connection needs to be explained in the method or appendix.\", \"questions\": \"The important points listed in weakness 1-5.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
EVa5OIYBoG
Expanding the Web, Smaller Is Better: A Comprehensive Study in Post-training
[ "Zixuan Ke", "Yifei Ming", "Xuan-Phi Nguyen", "Caiming Xiong", "Shafiq Joty" ]
General-purpose large language models (GLLMs) like GPT-4 and LLaMA have demonstrated exceptional performance across a wide range of tasks. However, their performance often falls short in domain- or task-specific applications, where deeper, specialized knowledge is essential, while maintaining general knowledge remains crucial for handling broader, unseen tasks. Post-training has been widely applied to make LLMs specialized, typically consisting of multiple stages, including Domain-Adaptive Pre-Training (DAPT) and Supervised Fine-Tuning (SFT). In this work, we conduct a comprehensive study on three key aspects of post-training taking Finance as a target domain: (1) the distinct roles of DAPT and SFT in post-training, (2) strategies to mitigate knowledge forgetting across stages, and (3) evaluation methods that capture both general and domain-specific capabilities. Our results show that DAPT and SFT require distinct training objectives, joint training of DAPT and SFT is essential for maintaining stage knowledge and encouraging knowledge transfer across stages, and replay mechanisms are critical for preventing forgetting. Evaluation should encompass general, seen, and unseen tasks for a complete assessment. Based on these insights, we developed a Joint-and-Replay post-training recipe and built LLaMA3-8B-Fin, a smaller yet more powerful state-of-the-art financial LLM trained through post-training. Despite its smaller size, LLaMA3-8B-Fin surpasses larger models like GPT-4o and LLaMA3.1-70b on both seen and unseen financial tasks while retaining general knowledge, demonstrating that a well-structured post-training can “expand the web” of capabilities in smaller LLMs, enabling them to outperform much larger models.
[ "Post-training", "Continual Learning", "Large Language Models" ]
https://openreview.net/pdf?id=EVa5OIYBoG
https://openreview.net/forum?id=EVa5OIYBoG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vjjOF4q9XO", "YkQpTSPvCG", "Djz2SpZXwy", "A5UznBgc7u" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730037853323, 1730693691584, 1730642705933, 1733185847740 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11414/Reviewer_TaPq" ], [ "ICLR.cc/2025/Conference/Submission11414/Reviewer_Wpsy" ], [ "ICLR.cc/2025/Conference/Submission11414/Reviewer_MAEb" ], [ "ICLR.cc/2025/Conference/Submission11414/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper mainly analyzes and discusses three significant issues within the post-training phase, including analysis the primary functions of DAPT and SFT, methods to alleviate catastrophic forgetting in the continuous learning process of LLMs, and evaluation of LLMs in both general and specific domains. This work answers the above three questions through experiments. Additionally, they propose a Joint-and-Replay training method to address the problem of catastrophic forgetting.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper discusses training methods for the application of LLMs in a specific domain and highlights three key questions that have great practical value in the post-training phase.\\n2. This paper is well-organized, featuring concise and clear sentences that facilitate a clear understanding of the core ideas. Furthermore, the figures and tables are well-crafted, effectively presenting the results of the experiments.\", \"weaknesses\": \"1. The training techniques mentioned in the paper, such as masking the content of instructions in SFT and using a replay strategy to mitigate forgetting [1][2][3], are commonly employed training skills in the LLMs field. Even the proposed Joint-and-Replay training method in this paper is also a commonly used training skill, lacking significant distinctions or standout features compared to existing methods. 
Perhaps the author should highlight the differences between the proposed methods and existing works in the paper.\\n\\n[1]Llama 2: Open Foundation and Fine-Tuned Chat Models\\n\\n[2] Qwen2 Technical Report\\n\\n[3]Fine-tuned Language Models are Continual Learners\\n\\n2. While the author raises three crucial issues within this domain, the core conclusions drawn from these three questions do not present any remarkable insights. The first two questions have been extensively explored in previous literature[4][5][6][7]. Furthermore, the evaluation methods discussed in the post-training stage still adhere to standard procedures without introducing novel evaluation approaches. After further post-training to enhance the model's capabilities in specific domains, can a new evaluation method be introduced to dynamically evaluate the model's performance in specific tasks and general domain knowledge? I suggest the authors explore other innovative evaluation methods from these perspectives in the paper.\\n\\n[4] LIMA: Less Is More for Alignment\\n\\n[5] Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?\\n\\n[6] Analyzing the Forgetting Problem in Pretrain-Finetuning of Open-Domain Dialogue Response Models\\n\\n[7] Simple and Scalable Strategies to Continually Pre-train Large Language Models \\n\\n3. The authors should focus on **introducing their proposed methods in the Method section**, rather than extensively on basic training techniques like SFT and pre-training in section 3.1.\", \"questions\": \"**Joint Training Details:** The paper mentions joint training of DAPT and SFT. How did you set the training ratio? Did you experiment with different ratios, such as an even split between DAPT and SFT, or a higher proportion for DAPT? Could you provide more details on the joint training process, such as training time, learning rate, etc.?\\n\\n**Evaluation of Unseen Tasks:** The paper mentions that unseen tasks are primarily from the financial domain. 
How did you define unseen tasks? Did you attempt using data from other domains as unseen tasks? Could you provide more details on the evaluation of unseen tasks, such as evaluation metrics, task types, etc.?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper aims to focus on post-training for LLMs, specifically on the financial domain. It investigates Domain-Adaptive Pre-Training (DAPT) and Supervised Fine-Tuning (SFT) in building a finance-specific model, LLaMA3-8B-Fin, using a proposed \\\"Joint-and-Replay\\\" approach. This methodology aims to enhance domain-specific knowledge retention while maintaining general capabilities. The evaluation includes both general benchmarks and finance-specific tasks, with findings suggesting that LLaMA3-8B-Fin achieves competitive performance on finance tasks compared to larger models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Focused application on finance. The paper\\u2019s focus on developing a LLM for finance and highlights practical insights for domain-specific adaptation.\\n2. The Joint-and-Replay approach is straightforward and could offer a useful recipe for practitioners aiming to balance domain-specific and general knowledge in smaller LLMs.\\n3. Experiments were conducted comprehensively on sufficient number of existing evaluation datasets.\", \"weaknesses\": \"1. Insufficient support for broad claims on post-training.\\n - The paper asserts a broad investigation into the entire post-training stage for LLMs but does not sufficiently review or acknowledge recent advancements in this area. 
Particularly, the claim that \\\"most approaches merely involve additional pre-training on specialized data or rely on the traditional LLM framework where a single pre-training stage is followed by task-specific fine-tuning via classifiers\\\" (lines 77\\u201383) fails to account for significant contributions in post-training, including RLHF which is a very important stage in post-training. \\n - Extending from SFT, research in post-training has also focused on self-training methods like RFT [1], STaR [2-3], ReST^EM [4] and self-reward [5]. There have also been works that aim to unify SFT with RLHF [6] and numerous works studying SFT/DPO/PPO. The technical reports of Llama-3, for example, dedicated substantial amount of pages to discuss their post-training techniques. The paper\\u2019s claim of a comprehensive study does not seem well-supported to me.\\n - The claim of scope for this paper should focus on LLM for finance instead of post-training.\\n\\n2. Unclear significance in general benchmark evaluation. The general benchmark results show that LLaMA3-8B-Fin does not degrade compared to LLaMA3-8B-Instruct but underperforms other LLMs. Since domain-specific models like Palmyra-Fin-32K also maintain performance on standard benchmarks, this result lacks a clear takeaway.\\n\\n3. The results in Table 8, showing full fine-tuning outperforming LoRA, are expected. It\\u2019s widely understood that full fine-tuning generally yields better results when computational resources are sufficient. This section does not add novelty, as many models (e.g., LLaMA) already favor full fine-tuning for optimal performance.\\n\\n4. 
Section 7 on evaluation lacks significance in contribution, given the extensive existing work on LLM evaluation.\\n\\n\\n[1] Scaling relationship on learning mathematical reasoning with large language models\\n\\n[2] STaR: Self-Taught Reasoner\\n\\n[3] V-star: Training verifiers for self-taught reasoners\\n\\n[4] Beyond human data: Scaling self-training for problem-solving with language models\\n\\n[5] Self-rewarding language models\\n\\n[6] Intuitive Fine-Tuning: Towards Unifying SFT and RLHF into a Single Process\", \"questions\": \"See above in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explored the post-training of language models to adapt them for domain-specific tasks. In particular, the paper discusses Domain-Adaptive Pre-Training (DAPT), and Supervised Finetuning (SFT) for pretrained and chat-tuned models (LLaMA3 8B). The authors evaluated the proposed post-training scheme on the financial domain, and showed that their model can outperform a SoTA finance-specific language model on a certain set of tasks. The authors also showed that joint DAPT and SFT where the model is trained jointly on text-token prediction on raw text and instruction following provide the best performance, and results in the least amount of reduction in the general capabilities of the pretrained model (i.e., the least forgetting).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper considered the important problem of post-training of language models to adapt them for domain-specific tasks\", \"The paper presented different design choices, and proposed the use of joint training with replay to achieve best performance\", \"The approach was applied to obtain a state-of-the-art language model for the financial domain\"], \"weaknesses\": [\"The selection of different datasets seemed arbitrary and confusing. 
Aqua, math, GSM-8k, and other evaluation datasets were included during pretraining, which defeats the purpose of evaluation. I would have preferred to use generic pretraining datasets such as Wikitext/C4/PILE/FineWeb etc.\", \"Any description of datasets, and the associated choices were missing from the paper, making it hard to understand the reason for those choices. This made it hard to understand the significance and the reliability of the obtained results. The authors only mentioned filtering of URLs from FineWeb without any further details. The inclusion of details about each of the datasets and the reasons for their inclusion is important to be specified.\", \"The paper just explored SFT, without considering distillation as a potential remedy for forgetting. I would be keen to understand the impact of distillation, or even just generation of the target documents from the model.\", \"The reliability of the comparison with the other SoTA model (Palmyra) is unclear. No description of what that model is trained on is provided. It would be important to elaborate on the comparison.\"], \"questions\": [\"Was the answer for the dataset such as GSM-8k also included during DAPT (table 2a)?\", \"What kind of documents from FineWeb were filtered? Mentioning that they were selected based on URLs is very vague\", \"Why did the authors decide to include evaluation datasets instead of regular pretraining datasets such as PILE/C4/FineWeb?\", \"Why was the comparison against the other SoTA financial model (Palmyra) fair? Was the model trained and exposed to the same datasets that the authors used for finetuning?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
EVZnnhtMNX
Scalable Preference Learning for Large Language Models via Convex Optimization
[ "Miria Feng", "Mert Pilanci" ]
Fine-tuning large language models (LLMs) for alignment with human preferences has become a key factor in the success of models like ChatGPT and Gemini, which are now integral to mainstream use. Many effective techniques are based on Reinforcement Learning from Human Feedback (RLHF), yet are challenging and expensive to implement. Direct Preference Optimization (DPO) offers an accessible alternative by simplifying the objective, but can exhibit random ranking accuracy and requires a frozen reference model. In this paper, we develop a fast and even more lightweight DPO-based algorithm --- \emph{CVX-DPO} --- that operates on a single GPU. The key to achieving this is leveraging the convex optimization reformulation of neural networks, which eliminates the dependence on copying the reference model and is robust against hyperparameter tuning. CVX-DPO can be trained to global optimality in polynomial time. We use the Alternating Direction Method of Multipliers (ADMM) to solve this optimization problem in order to increase parallelization efficiency, and implement our methods in JAX to lift the memory constraints across experiments. We experiment on three datasets, including one synthetically generated educational dataset, to demonstrate the efficacy of our novel algorithm in a real-world setting. CVX-DPO outperforms traditional DPO in user preference generation when tested on human subjects, despite being trained on a single RTX-4090 GPU.
[ "large language models", "preference learning", "convex optimization" ]
https://openreview.net/pdf?id=EVZnnhtMNX
https://openreview.net/forum?id=EVZnnhtMNX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uTCUW07cTo", "rV21KGHnXd", "huVmsHaBxH", "WfCUcVpkga", "Ty99mtkKMN", "Tn7ahSG1d9", "Q755fGX4fQ", "Mtja3QNMD0", "DUJ9QvhLMg", "5UXnKf5wis", "4ON8HG5bsh" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_review" ], "note_created": [ 1731102870648, 1730396759557, 1733312391630, 1733313587744, 1733313403672, 1733313462967, 1730541590250, 1730659060738, 1733312836486, 1736724307406, 1730696881379 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13374/Reviewer_FPZ4" ], [ "ICLR.cc/2025/Conference/Submission13374/Reviewer_GGkp" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Reviewer_yuu8" ], [ "ICLR.cc/2025/Conference/Submission13374/Reviewer_DuL5" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Authors" ], [ "ICLR.cc/2025/Conference/Submission13374/Reviewer_aHcb" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a lightweight version of DPO that is supposed to require less resources.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"It is desirable to make RLHF use less resources. Though note that this paper does not make DPO use less resources as was claimed in the abstract.\", \"weaknesses\": \"1. The introduction contains sweeping generalizations that are incorrect and not appropriate for an academic paper. For example, it is possible to perform RLHF on off-policy data. 
And it is hard to say \\\"This model is able to infer what a human user wants and output a realistic answer that a human might like.\\\" This is not an accurate description of preference learning, and there are many documented issues with DPO and RLHF. Much of the writing throughout the paper is too informal and loose -- claims should be made more precisely and odd, extra text should be omitted. For example, much of the \\\"JAX for Speed and Memory\\\" section is written oddly (eg \\\"our past work has found it to be extremely performant\\\" with no citation, and \\\"Recent research in review will provide more in depth discussion\\\")\\n\\n2. The emphasis on a 4090 GPU is a bit artificial -- a lot of the memory gain comes from tuning a smaller NN, not necessarily from the method. The authors should amend the introduction to mention the model size. This issue is even worse in Section 3, where the authors claim that their method circumvents the need for FSDP. Indeed, this is probably only the case because the model is smaller.\\n\\n3. The experiments and results sections are clearly rushed. First of all, a win rate of \\\"4\\\" does not make any sense (Table 1). And speed and efficiency gains are claimed without any reported evidence. The evaluation in Section 4.4 does not match any standard notion of evaluation (yes, evaluation of generative models is hard, but if you want to say your method is better than another, you need to use some standardized evaluations). These are just some of the issues -- I am not listing them all.\", \"questions\": \"DPO should not cost any additional memory when implemented properly. One can run and save the logits of the reference model before starting training, and then preference tuning the model is just the cost of normal training. 
I am confused where the efficiency gain comes from besides tuning a smaller model, and I don't see any reported evidence besides informal gains (\\\"roughly twice as fast\\\", etc).\\n\\n\\nI honestly did not understand the method. Where does the two layer network even show up? How does adding an additional network result in an efficiency gain? What is this convex reformulation? Note that my lack of understanding for the method doesn't undermine my review, which was mostly focused on pointing out issues in the evaluation and experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method using a convex reformulation of NNs to perform lightweight DPO and introduces a strategy for generating preference datasets. They use a convex NN that classifies hidden features as chosen or rejected and uses the output with a DPO style loss. The dataset generation strategy uses a natural conversation between two models and sets the first part of the dialogue as the prompt, the second as the chosen responses, and the remaining parts as rejected.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a reference-free method for direct preference learning, and uses a convex-NN for more efficient training of a two layer NN block.\", \"weaknesses\": \"There are parts of the notation that are unclear such as in line 145, $X$ is said to be a two-layer ReLU MLP but has a matrix shape. Additionally, in equation (4a), F is defined in terms of $D_i X$ where $D_i$ had not been defined.\\n\\nThe method itself is also not clearly explained. They mention extracting hidden features as the policy model optimizes, but it is not described how/where these are extracted from. 
They also mention optimizing DPO's BCE style loss but it is not clear exactly how they define the loss given that the method should be reference-free. Furthermore, it is unclear why, given the same reference-free loss, the reward should be defined using a convex-NN instead of the original log-prob definition. Along with this, it is unclear how this method performs with respect to existing reference-free methods such as ORPO or SimPO. \\n\\nLastly, the advantage of the dataset generation method is unclear. One strong issue is using most of the conversation as rejected responses, which contrasts a direct response to the prompt with responses that would not appear in the same context. Additionally, the novelty of the method is unclear as datasets such as HH already draw multiple samples from the same conversation.\", \"questions\": \"1. Can you define each of the terms in equation 4 more clearly along with $D_i$ and $X$?\\n2. How does this method compare to existing reference-free methods?\\n3. What is the motivation for using the convex-NN to define reward?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your valuable feedback! We realize the paper was initially unclear, and have explicitly updated the Algorithm with supporting theoretical proof.\", \"DPO is not the same as RLHF. It is a more efficient alternative to RLHF, which can only be implemented by organizations with many resources. Despite offering improvements, DPO can still be very expensive. In this paper, we propose a new algorithm, CVX-DPO, as an alternative to DPO. CVX-DPO requires fewer resources to run, leading to convex optimization problems that are easy to solve on modern hardware and don't require extensive optimizer tuning to achieve good results. 
We apologize this was unclear in the original submission, but we believe the newly uploaded submission makes our contributions clear.\", \"The introduction has been re-written to be more clear. The specific sentence you cited was quoted from one of the original authors of the seminal DPO paper during discussion about this work. We realize that although we respect the expertise of the original DPO authors, it is not professional to quote them verbatim in our introduction and have removed this. The paper has been rewritten to be more formal, and is no longer loose. We have deleted extra text.\", \"We respectfully disagree with the reviewer on this point. When we say we can train on one 4090, we are talking about our method CVX-DPO relative to the original DPO, which runs into OOM memory issues even on these smaller models. For example, while CVX-DPO required one 4090 to train on the IMDb dataset with the GPT2-Medium model, we needed to use a cluster of A100s to get DPO to run. CVX-DPO also requires significantly less TFLOPS as seen above.\", \"CVX-DPO's approach is model agnostic. The gains in memory do not come from using a smaller model but from the approach taken by CVX-DPO, which stacks a two-layer convex neural network on top of an existing pre-trained model. This results in an optimization problem that is much more computationally efficient to solve than the one in the original DPO, regardless of the size of the base pre-trained model.\", \"Again, our comments on the need for FSDP concern DPO. While FSDP is needed for DPO to run on datasets such as IMDb, with CVX-DPO, we only needed one 4090.\", \"The new metrics have been listed above, including somewhat standard evaluations, such as GPT4 judge feedback which was used in the original DPO paper. The choice of 25 human samples was also implemented to match the seminal DPO experiments.\", \"We do agree that for the largest models, sharding across multiple devices will be greatly advantageous. 
This is especially true since our JAX codebase readily admits highly efficient sharding in future work. In summary, CVX-DPO will still require fewer resources than DPO due to the nature of its formulation. Thank you for your feedback!\"]}", "{\"comment\": \"Thank you for your feedback! We have thoroughly re-written the paper to address your questions and concerns. The initial typo of 17 persons was due to some discussion about the opinions of our human volunteers on being named in the work. We have resolved this problem in the revision.\\n\\n* With regards to point 1, we have added Section 3 to give background, mathematical definitions, and clearly derive our objective. This also resolves points 1 and 2 of your questions. We have also added convergence guarantees for our novel algorithm to global optimality in polynomial time. \\n* Our JAX codebase was provided in the zipped upload during submission. We have now also added an anonymous github repo for your convenience. Please see the link above in general comments. \\n* Our goal is to push the boundaries of what can be achieved on one GPU at this time. The inception of this project arose from the realization that DPO and its variants ran into OOM issues on one RTX-4090, on very small models and datasets. Additionally, extensive formatting of the preference dataset into chat template form is required, yet even after extensive training, results are often unstable. This is discussed in greater detail in the recent work of [1] and [2]. By optimizing on one GPU in JAX, we are able to lift the memory and resource constraints with the goal to democratize preference learning. The advantage of our JAX codebase is that it allows highly efficient multi-GPU sharding for large models in future work. \\n* Table 1 is presented as the win rate of the preference-tuned model output to the same prompt. We will make this more clear as a graph in the revision. We have also added GPT4 judge, TFLOPS, and time measurements as more detailed metrics. 
Please see the post above. \\n\\nThank you and we look forward to your comments.\\n\\n**References**\\n\\n[1] Meng, Y., Xia, M. and Chen, D., Simpo: Simple preference optimization with a reference-free reward. NeurIPS, 2024.\\n\\n[2] Angelica Chen, Sadhika Malladi, Lily H Zhang, Xinyi Chen, Qiuyi Zhang, Rajesh Ranganath, and Kyunghyun Cho, Preference learning algorithms do not learn preference rankings. NeurIPS, 2024.\"}", "{\"comment\": \"Thank you very much for your time and feedback! We have included citations to the works noted in the revision, and will discuss them thoroughly.\\n\\nWe have also added Section 3, which provides theoretical convergence guarantees for our work. Our benchmarks are replicated from the seminal work of the DPO paper, which also used 3 datasets on GPT4 judge and 25 human evaluators. We look forward to further discussions!\"}", "{\"comment\": \"While we politely disagree with your points, we have addressed them in the general comments posted above. Thank you for your feedback!\"}", "{\"summary\": \"This paper introduces a combination of\\n- convex formulation of MLP\\n- alternating direction method of multipliers\\nas an alternative to gradient descent to solve the DPO alignment.\\nThe authors claim that with the ADMM formulation, the parallelization efficiency can be increased. They implement the method in JAX and are able to train on a single RTX-4090 GPU.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The presentation of this paper is difficult to understand, as such, I don't have enough understanding to comment on the strengths.\", \"weaknesses\": [\"There are many obvious reasons to recommend a rejection to this paper, here I list a few.\", \"The major contribution claimed by this work is the convex optimization & ADMM algorithm to perform DPO. But there is no information provided on which part of the DPO requires an MLP that is convexified, and which loss is solved in ADMM instead of SGD. 
The concepts of convex-nn (1)(2), admm (4) and DPO (5) are introduced as disconnected pieces; there is no explanation of what each symbol means, e.g. F & G in (4), what they represent in the context of DPO, etc. Unfortunately, I could not understand what this method is doing given the current presentation.\", \"I find it difficult to understand the convex-nn part without referring to other papers. I would suggest reducing the text introducing JAX to save some space for a preliminary introduction of convex-nn, as it seems more relevant to the main method.\", \"The experiments report only human evaluations, which is not the standard in the relevant literature, which often uses LLMs as evaluators. Human evaluations could have strong subjective variance, especially with a small number (17).\", \"While this work is motivated by the robustness of training, the speed, and the capability to run on a single GPU, the experiments report win rate over DPO. I'm not convinced why this method would out-perform DPO if the main difference is the optimization algorithm.\"], \"questions\": \"IMO this paper needs a major revision in the writing. I cannot ask meaningful questions with the current level of understanding.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a lightweight alternative to RLHF in large language models with convex optimization. It reformulates the DPO objective using convex neural networks and leverages ADMM, achieving efficient training on a single GPU. Also, the authors implement the method in JAX, which improves the memory efficiency. 
Empirically, three datasets were explored by comparing DPO and DPO-convex.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"DPO-convex is a less computationally intense alternative compared to DPO, thus lowering the hardware requirements for preference learning and making RLHF more accessible.\", \"weaknesses\": \"1.\\tThe writing of this paper is not clear and the presentation can be significantly improved. In particular, it is not clear what the objective is for DPO-convex. How is a convex-nn constructed to formulate this objective? Please state the method and the setting in a more formal tone, and provide a detailed introduction on the methods or techniques used. For instance, how the convex-NN is constructed and integrated with the DPO objective?\\n\\n2.\\tThe authors mention implementing their method in JAX for improved memory efficiency, but no code is provided. Also, there are no quantitative comparisons on the memory usage of these two methods. Please provide a link for the code repository and include specific memory usage metrics for both methods in the results section.\\n\\n3.\\tThe models fine-tuned in the experiments are relatively small (DistilGPT2, GPT-2, GPT-2-medium). The scalability of this method to SOTA large models is uncertain. It would be great if the authors can conduct experiments on these models like Mistral or Llama 3 and more complicated tasks like MT bench or Alpaca-Eval?\\n\\n4.\\tA minor issue is the inconsistency in the reported number of volunteers, which is stated as 17 in some instances and 25 in others. Please see lines 053 and 069.\", \"questions\": \"1.\\tWhat does \\u201cprox\\u201d mean in equation (4b)?\\n\\n2.\\tCould you explain what \\u201cFST\\u201d stands for in the paper?\\n\\n3.\\tThe results in Table 1 are somewhat unclear. 
Could you elaborate on what a win rate of 1, 3, or 4 represents?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback! Details about the notation have been fixed, and variables such as D and X are now defined in the convex reformulation Section 3 to give background.\\n\\nThe method is now more clearly explained. We re-wrote and reorganized the paper to include a clear section naming and explaining the novel CVX-DPO algorithm. There is now a background section explaining convex neural networks, convergence proofs, and the algorithm box. The objective is listed and compared against other objectives in the table above, and we have included the derivation and formulation of our objective function. We have added SimPO as a competing method in our experiments. The motivation for the convex NN is that it gives polynomial-time convergence guarantees and that it is more resource- and time-efficient to solve. Please see metrics regarding this above.\\n\\nThe dataset generation method is intended to simulate a real-world conversational dataset situation. It offers diverse topics and varying turns of phrase that may occur in an educational setting. The preference generative sampling procedure is intended to relieve some of the dependence on formulating DPO training datasets into a chat template style, which is often sensitive and requires extensive training. We have added experiments against SimPO, and agree with the authors of SimPO that existing ref-free methods still require crucial hyperparameter tuning to succeed [1]. In contrast, CVX-DPO does not exhibit this weakness, and is faster while requiring fewer TFLOPs for increased efficiency. 
Our method also provides convergence guarantees in polynomial time, now listed in Section 3.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors have developed a lightweight variant of Direct Preference Optimization (DPO) for aligning large language models with human preferences. This is achieved by reformulating the DPO using a convex optimization reformulation of neural networks. Experimental results show the promise of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Strengths of the paper include:\\n1. The paper addresses the scalability of DPO by offering a more efficient alternative\\n2. The use of convex optimization reformulation of neural networks as a surrogate for DPO loss appears to be novel.\\n3. The paper offers methods that could make DPO accessible to researchers without access to multiple GPU systems.\", \"weaknesses\": \"1. The proposed method is largely a combination of existing well-established approaches - convex function reformulation of neural networks, ADMM, JAX and hence the contributions are somewhat incremental.\\n2. There is little offered with respect to theoretical results on preference optimization\\n3. The experiments seem to be somewhat limited and do not include many benchmarks used by other researchers\", \"questions\": \"1. How do you expect your method to compare against SOTA DPO methods on other data sets such as the ones used in https://arxiv.org/abs/2407.13709, https://arxiv.org/pdf/2403.19159 and other related work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
EVK0sQHVCd
How and how well do diffusion models improve adversarial robustness?
[ "Liu Yuezhang", "Xue-Xin Wei" ]
Recent findings suggest that diffusion models significantly enhance empirical adversarial robustness. While some intuitive explanations have been proposed, the precise mechanisms underlying these improvements remain unclear. In this work, we systematically investigate how and how well diffusion models improve adversarial robustness. First, we observe that diffusion models intriguingly increase—rather than decrease—the $\ell_p$ distances to clean samples. This is the opposite of what was previously believed. Second, we find that the purified images are heavily influenced by the internal randomness of diffusion models. To properly evaluate the robustness of systems with inherent randomness, we introduce the concept of fuzzy adversarial robustness, and find that empirically a substantial fraction of adversarial examples are fuzzy in nature. Finally, by leveraging a hyperspherical cap model of adversarial regions, we show that diffusion models increase robustness by dramatically compressing the image space. Our findings provide novel insights into the mechanisms behind the robustness improvements of diffusion-model-based purification and offer guidance for the development of more efficient adversarial purification systems.
[ "diffusion models", "adversarial purification", "robustness" ]
Reject
https://openreview.net/pdf?id=EVK0sQHVCd
https://openreview.net/forum?id=EVK0sQHVCd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOfOBKPyEw", "udk3rB5Axj", "ttclY173IS", "tJfcTdLgnB", "pAYYy7Khoz", "mF0rKYT6PD", "lcIjOSlQwR", "ibWv65G6Vy", "fnKbGihefH", "f4XpkdQHEY", "eu5s2PK7aW", "cI0G2Uz749", "Xi5UFwYNq7", "W6L8CzLnpO", "UJXurXgEzc", "T4Hm6BFLJK", "Q7GdETVxdY", "PrdAqkehqU", "Oxf1j4ioUt", "MvoDFYzXmr", "FmMFfwYn8x", "4119PZViYV", "173Gdpudkt" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730714322020, 1732334955638, 1733008611817, 1732733288357, 1733156674436, 1733043676641, 1732334874168, 1730698451109, 1730715566059, 1732432542179, 1732334379620, 1733288406437, 1737523727271, 1733288653295, 1732904473013, 1733009385092, 1733009920260, 1734668945102, 1730443439815, 1730706302809, 1732432513459, 1732733412224, 1732905134375 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_7QBf" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_7QBf" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_aGTV" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_cqB1" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_aGTV" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Area_Chair_PaGw" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_iBpK" ], [ "ICLR.cc/2025/Conference/Submission5822/Reviewer_pJXW" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ], [ "ICLR.cc/2025/Conference/Submission5822/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This is an analytical work investigating how diffusion-based purification (DBP) improves adversarial robustness. With pilot experiments, this work finds that diffusion models increase the l_p distances to clean samples and that the purified images are heavily influenced by the internal randomness of diffusion models. Furthermore, this work introduces the concept of fuzzy adversarial robustness and a hyper-spherical cap model of adversarial regions, and gives an explanation of how DBP works.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tOverall, this paper is well-written, with clear organization and illustrations.\\n2.\\tThe findings in this work are reasonable, with sufficient explanations and experimental support.\", \"weaknesses\": \"My major concern is about the contribution of this work:\\n\\n1)\\tIt is not surprising that diffusion-based purification will guide the adversarial images to a place with a larger distance to the original images, as the adversarial perturbation is usually small and the corruption process (adding Gaussian noise) makes it hard to reverse the image back to the original (information on the original image is missing). 
According to [1,2], the reverse process of diffusion models aims to make the generated image more likely to be close to the original image distribution, which does not mean the image should be closer to the original image. Below are some suggested experiments to make this work more comprehensive:\\n\\n- a)\\tBesides the traditional l_p distance, metrics for generative models (e.g., FID score, Inception score) should also be used to evaluate the distance. In such a latent space, the conclusion might be different.\\n\\n- b)\\tLines 171-172. I think this experiment is important and should not be hidden.\\n\\n- c)\\tEarly denoiser-based defenses [3] have been shown to be ineffective while diffusion-based purification is effective. It is important to show whether early denoiser-based defenses exhibit similar behavior (enlarging the l_p distance) to diffusion models. If they are similar, the findings in this work may not explain why they have different defense performance.\\n\\n2)\\tThe proposed fuzzy adversarial robustness (a new framework in evaluation) is similar to [1] while the discussion on it is missing.\\n3)\\tThis work claims that the findings offer guidance for the development of more efficient adversarial purification (AP) systems, but no deeper discussions and experiments are provided. If this work can give a prototype method of more efficient AP with the findings, I could have improved my score.\", \"line_450\": \"$\\\\textbf{x}_0$, line 269: $\\\\textbf{x}^{`}$\\n\\n[1] Xiao C, Chen Z, Jin K, et al. Densepure: Understanding diffusion models towards adversarial robustness[J]. arXiv preprint arXiv:2211.00322, 2022.\\n\\n[2] Chen H, Dong Y, Wang Z, et al. Robust classification via a single diffusion model[J]. arXiv preprint arXiv:2305.15241, 2023.\\n\\n[3] Liao F, Liang M, Dong Y, et al. Defense against adversarial attacks using high-level representation guided denoiser[C] CVPR. 2018: 1778-1787.\\n\\n[4] DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification. NeurIPS, 2024\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Replies to Questions**\\n\\n* The y-axis in Fig. 1(c) represents the \\u201cdistances to the original clean image\\u201d of purified states using diffusion models. This corresponds to a diffusion model with $t^*=0.1$ in DiffPure on CIFAR-10, involving 100 forward diffusion steps and 100 reverse denoising steps. From steps 0\\u2013100, the distances increase due to noise injection; from steps 100\\u2013200, the image becomes clearer as the model denoises. The distance at step 0 reflects the initial adversarial perturbation ($\\\\ell_\\\\infty=8/255$, corresponding in $\\\\ell_2$). \\n\\n* Your intuition is correct: diffusion models perform suboptimally without desired noise levels. This is evident from a drop in clean accuracy when omitting the forward process and only performing the reverse process. This behavior varies by dataset (e.g., a greater drop on ImageNet than CIFAR-10) and checkpoint quality. Below are data from prior experiments: \\n\\n| Dataset | Purification Method | t | Clean Acc. 
| BPDA | BPDA-EOT |\\n|---------------|--------------------------------|------|------------|-------|----------|\\n| CIFAR-10 | DiffPure (Fix-noise) | 150 | 79.61 | 68.00 | 67.20 |\\n| CIFAR-10 | Reverse-only (Fix-noise) | 150 | 81.92 | 74.20 | 74.90 |\\n| ImageNet | DiffPure (Fix-noise) | 150 | 68.50 | 34.00 | -- |\\n| ImageNet | Reverse-only (Fix-noise) | 100 | 57.10 | 31.00 | -- |\", \"note\": \"Noise was fixed in these experiments to isolate randomness effects (thus affecting robustness but not clean accuracy).\\n\\nOverall, reverse-only diffusion achieved better clean accuracy and robustness on CIFAR-10, with a slightly worse robust accuracy on ImageNet. These findings align with our claims, and additional results will be included in the Appendix of the final version. \\n\\n* Yes, Fig 5(b) shows the explained variance of the PC eigenvectors, corresponding to the PCA analysis. Fig 5(c) displays the eigenvalues of Jacobian matrices, corresponding to the Jacobian analysis. \\u201cExplained\\u201d refers to the variances of data once projected onto corresponding PC directions.\\n\\nThanks again for your review. Please feel free to ask any further questions and we are always happy to discuss.\\n\\n***\\n### **References**\\n\\n[1] Zahra Kadkhodaie, Florentin Guth, Eero P Simoncelli, and St\\u00e9phane Mallat. Generalization in diffusion models arises from geometry-adaptive harmonic representations. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[2] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.\\n\\n[3] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. \\\"Certified adversarial robustness via randomized smoothing.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[4] Salman, Hadi, et al. 
\\\"Provably robust deep learning via adversarially trained smoothed classifiers.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[5] Florian Tram\\u00e8r, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017.\"}", "{\"comment\": \"We thank the reviewer for the thoughtful feedback and appreciate the opportunity to address the concerns raised. Below, we provide detailed responses to the key weaknesses identified.\\n\\n### **Replies to Weaknesses**\\n\\n#### **Clarification on the Misclaim** \\n- We would like to clarify regarding the misclaim about prior explanations of robustness improvement under diffusion models, which was also highlighted by reviewer **aGTV**. Our intent was to highlight that prior works offered intuitive explanations, which *motivated* us to investigate whether diffusion models increase or decrease $\\\\ell_p$ norms during adversarial purification. This question remains under-investigated to the best of our knowledge. \\n\\n We agreed that prior works did not claim, nor experimentally demonstrate, that \\u201cdiffusion models improve robustness by decreasing $\\\\ell_p$ norms to clean images.\\u201d To ensure clarity, we plan to rephrase **Sec. 3.1** as follows: \\n\\n > \\u201cWhile the exact mechanisms for robustness improvement under diffusion models remain unclear, intuitive explanations have been discussed in the DiffPure paper [Nie et al., 2022], e.g., diffusion models ``recover clean images through the reverse denoising process.'' This motivates us to test a simple hypothesis: diffusion models shrink the $\\\\ell_p$ distances towards clean images during adversarial purification. \\u201d\\n\\n We thank both reviewers for pointing out this ambiguity. If there are further suggestions for refining this section, we welcome your input. 
We believe this revision will effectively address the concern while preserving the integrity of our experiments and main arguments.\\n\\n\\n- Regarding the theoretical explanation in Theorem 3.1 of DiffPure [1], we appreciate the reviewer drawing attention to its significance. However, we believe it does not fully explain how diffusion models enhance adversarial robustness. Specifically, the theorem addresses the forward process and shows that adding Gaussian noise reduces the $\\\\ell_p$ distances between two distributions. However, it does not account for the fact that diffusion models with only the **reverse denoising process** can also improve robustness (line 172 of our paper, a finding also reported in DensePure [2]). Additional experimental results highlighting this phenomenon will be included in the Appendix, as suggested by reviewers **7QBf** and **iBpK**. \\n\\n Our proposed explanations of the **adversarial compression effect** and the **hyperspherical cap model** provide alternative perspectives on the reverse denoising process and its role in adversarial robustness. We view these results as complementary insights of the original theoretical work developed in the DiffPure paper. \\n\\n- In response to suggestions from reviewers **7QBf** and **cqB1**, we evaluated the **structural similarity index measure (SSIM)** [2], a perceptual metric, in addition to $\\\\ell_p$ distances. The results are summarized below: \\n\\n | Distance (to clean images) | Adversarial (PGD-EOT, $\\\\ell_\\\\infty=8/255$) | Random (uniform, $\\\\ell_\\\\infty=8/255$) | Purified states |\\n |-------------------------------------|------------------------------------------------------------|-----------------------------------------------------------|----------------------|\\n | **SSIM** | 0.963 $\\\\pm$ 0.030 | 0.966 $\\\\pm$ 0.032 | 0.791 $\\\\pm$ 0.085 | \\n\\n As shown, we observed an approximately 20% decrease in SSIM after diffusion purification. 
This interesting complementary finding will be incorporated into the final version of the paper. \\n\\n However, we continue to emphasize the importance of $\\ell_p$ distance measurements in the context of adversarial purification. This is because adversarial attacks are inherently constructed based on $\\ell_p$ distances. Diffusion models do not merely convert adversarial perturbations into smaller $\\ell_p$ distances, which would simplify the problem; instead, they purify states, making them **perceptually closer** to the original images while potentially increasing $\\ell_p$ distances. This highlights a unique aspect of their operation that we believe warrants further exploration.\"}", "{\"comment\": \"We thank the reviewer for providing thoughtful comments and constructive feedback. Below, we address your questions and concerns in detail.\\n\\n### **Replies to Weaknesses**\\n\\n* **Advanced Sampling Techniques**\\nSampling techniques beyond DDPM generally involve continuous or deterministic methods. Continuous methods, such as those solving stochastic differential equations (SDEs) [1], may introduce gradient masking effects during adversarial purification, as observed in adversarial purification with neural ODEs [2]. Hence, our study focuses on discrete sampling methods. To evaluate deterministic methods, we measured the $\\ell_p$ distances of purified images using the DDIM sampling approach [3]: \\n\\n| Sampling Method | Clean Accuracy | $\\ell_2$ to Clean Images | $\\ell_\\infty$ to Clean Images |\\n|-----------------|----------------|--------------------------|-----------------------------|\\n| **DDIM** | 87.9% | 2.895 \\u00b1 0.456 | 0.249 \\u00b1 0.041 | \\n\\nThe results show that DDIM achieves comparable clean accuracy to DDPM on CIFAR-10, with $\\ell_2$ and $\\ell_\\infty$ distances to clean images smaller than those of DDPM but still greater than the initial adversarial perturbations. 
These findings are consistent with our general results and highlight the robustness of our observations across different sampling methods. We will include these results in the revised manuscript.\\n\\n* **Future Directions for Diffusion-Based Defenses** \\n- The observed **increase in $\\ell_p$ distances** (Fig. 1) and the decisive effect of randomness (Fig. 2) suggest that conventional diffusion models, targeted for image generation, operate on variance scales much larger than typical adversarial perturbations. A potential avenue for future work is to train diffusion models tailored to the low-noise regime, transitioning into the **$\\ell_p$ shrinkage** regime (Fig. 1d) to establish true attractor dynamics.\\n\\n- A recent study [4] explored the transition from memorization to generalization in diffusion models, using the bias-free denoising framework [5]. This framework demonstrated that denoising CNNs trained on specific noise levels struggle to generalize to unseen noise levels, an issue mitigated by removing bias terms. Similarly, it is plausible that the lack of $\\ell_p$ shrinkage behavior in diffusion models under low noise conditions stems from the presence of bias terms within the U-Net architecture; investigating these biases systematically may lead to more effective defenses.\\n\\n- The identified **adversarial compression effect** offers a practical metric for evaluating purification systems without relying on computationally intensive empirical adversarial attacks. This insight could guide the development of more efficient adversarial purification strategies.\"}
It is worth noting that adversarial examples are beyond l_p norm. For example, there are patch-attack and diffusion-based attacks. The different conclusions between lp distance and perceptual-based metrics make me worry about the generalization of the conclusion, especially targeting the attacks beyond l_p attack.\"}", "{\"comment\": [\"I thank the authors for their thorough responses to my review. Here are my replies.\", \"**Clarification on the Misclaim**: I accept the clarification on the misclaim about prior explanations of how diffusion-based purification methods achieve robustness. I acknowledge that this issue can be addressed by moderate revision of the relevant texts in this paper, without significantly changing the related conclusions and contributions. The supplementary experiments in the rebuttal on perceptual distance further complement the understanding of the behavior of these methods.\", \"**The Hyperspherical Cap Model**: I appreciate the authors' attempt to formalize concepts like \\\"adversarial directions\\\" and propose the hyperspherical cap model to depict the adversarial regions, and my major concern is the soundness of these theoretical models. The authors' responses have partly addressed this concern, e.g., by empirically showing that the decision boundary is not likely to be crossed a second time along an adversarial direction within the common perturbation ranges in existing studies. However, as the theoretical models are grounded on limited empirical observations, the significance and contribution of this point may be controversial.\", \"**Adversarial Compression**:\", \"I previously thought that the critical threshold here is the property of the whole purification-classification system. Now I understand the claims related to Fig. 6, where the compression effect of the purification model refers to reducing \\\"the distances of adversarial examples toward the anchor point $f(x _ 0)$\\\", as explained in the rebuttal. 
I acknowledge that the disentanglement of the compression rate and the critical distance provides a new perspective on the robustness of purification-based defense.\", \"I have read the authors' response to Reviewer 7QBf and pJXW on the possible future directions motivated by the findings and perspectives of this paper. However, according to Fig. 6, an important direction is to ensure that the critical threshold for $f(x _ 0)$ is large enough, but this is not explicitly mentioned by the authors.\", \"An additional concern is that the fuzzy robustness defined in Sec. 4 and the stochastic nature of diffusion models seem not considered in the discussions in Sec. 5 and Sec. 6. Will stochasticity affect these discussions?\", \"Overall, I appreciate the contributions of empirical studies on the intriguing behavior of diffusion models (Sec. 3) and the concept of fuzzy robustness (Sec. 4). The discussions in Sec. 5 and Sec. 6 are intuitive but not rigorous enough, and their significance on understanding and improving diffusion-based purification methods seem to be limited.\", \"**Therefore, I will increase my score from 3 to 5 for now, and I look forward to further discussions with the authors and other reviewers.**\"]}", "{\"comment\": \"### **Replies to Weaknesses**\\n\\n* We would like to first clarify that the sentence is not essential to our key results, as it is just a tentative interpretation of our results. We would be happy to modify or remove it. Here, we are referring to the results from [1] (Figure 2), and the idea is very simple: when the training set is small, diffusion models tend to memorize individual samples in the training set, which is harmful for generalization on the testing set. As the training set increases, diffusion models transition to the generalization phase. The diffusion models typically used are already in this generalization phase, and our experiments further confirmed this observation (not shrinking the $\\\\ell_p$ distances). 
\\n\\nIn addition, reviewers **7QBf** and **cqB1** suggested measuring perceptual distances in addition to $\\\\ell_p$ distances, and we consider this to be a great idea. We have now measured the **structural similarity index measure (SSIM)** [2] relative to clean images, with the results as follows: \\n\\n| Distance (to clean images) | Adversarial (PGD-EOT, $\\\\ell_\\\\infty=8/255$) | Random (uniform, $\\\\ell_\\\\infty=8/255$) | Purified states |\\n|-------------------------------------|------------------------------------------------------------|-----------------------------------------------------------|----------------------|\\n| **SSIM** | 0.963 $\\\\pm$ 0.030 | 0.966 $\\\\pm$ 0.032 | 0.791 $\\\\pm$ 0.085 | \\n\\nAs shown, we observed an approximately 20% decrease under perceptual distances after diffusion purification. This is an interesting complementary result that we will include in the final version of the paper. However, we feel that the $\\\\ell_p$ distance measurements are still important for understanding adversarial purification, as adversarial attacks are calculated based on $\\\\ell_p$ distances. \\n\\n* Here we are referring to the Pearson correlation coefficients. Regarding the x-y axes, we sampled either (a) 100 different starting points within the initial adversarial ball using the same random seed during diffusion, or (b) started with the original image but used 100 different random seeds during diffusion. For both settings, we acquired 100 purified states and subtracted the original image to obtain 100 purified vectors. We calculated the Pearson correlation matrix (`np.corrcoef`), resulting in the 100-by-100 matrix shown in the figure. \\n\\n* We appreciate the reviewer\\u2019s deep understanding of Carlini\\u2019s work and their clarification, which helped us clarify this issue. We have rephrased the claim \\u201crandomness is a bug\\u201d as \\u201crandomness may obscure gradients, making evaluations challenging\\u201d in **Sec. 
4.1** accordingly. \\n\\n* While developing the idea, we also felt it would be beneficial to establish a stronger linkage between randomized smoothing and fuzzy robustness. Despite similarities, there are key differences between these concepts: \\n\\n1. **Randomized smoothing** [3] does not necessarily conduct adversarial attacks but relies on numerous evaluations with Gaussian-noisy inputs, categorizing it under *certified robustness*. In contrast, **fuzzy adversarial robustness** evaluates the probability (fuzziness) of adversarial examples for a stochastic system, requiring adversarial attacks first and falling under *empirical adversarial robustness*. \\n2. The **source of randomness** differs: randomized smoothing adds noise to inputs for classifier smoothing, while fuzzy robustness arises inherently from stochastic systems without input noise (no smoothing). Fuzziness is meaningful only for stochastic systems. \\n3. **SmoothAdv** [4], though calculating adversarial attacks, aims to enhance robustness via adversarial training with randomized smoothing, remaining under certified robustness. Fuzzy robustness, however, evaluates robustness directly from the stochastic behavior of the model, without smoothing. \\n\\nWe believe linking the grades of the strongest empirical attack (PGD-EOT) to the certified bound of randomized smoothing would significantly strengthen the concept of fuzzy adversarial robustness. We will add a discussion on this point and add the relevant citations in **Sec. 4.1**.\"}", "{\"summary\": \"This paper examined the properties of diffusion models used in defensed against adversarial attacks. The authors showed that diffusion models do not shrinkage the distance of the transformed images toward clean images. Instead, diffusion models force the images to a compressed latent space, thus enhancing adversarial robustness. 
Altogether, this paper provides an explanation as to why diffusion models improve empirical adversarial robustness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper made an interesting observation on adversarial defenses based on diffusion models, where the purified images have larger l2 distance to the clean images than the adverasrial images. This suggests that diffusion models do not increase robustness by simply removing noise.\", \"The paper is well-written and motivated, providing clear empirical evidence and concrete discussion as to how diffusion models work on improving adversarial robustness.\"], \"weaknesses\": [\"In Figure 1, the authors show that the l_2 distance to the original clean images increase after diffusion purification. However, the authors do not consider the effect of the sampling process of diffusion models, e.g., deterministic vs random, or more advanced sampling method [1]. It is encouraged to validate the observation on more sampling methods of diffusion models.\", \"It might be good to have some practical discussions on how the observations/understanding from this paper could contribute to the development of future defenses, or even improving existing diffusion model based defenses.\", \"[1] Elucidating the design space of diffusion-based generative models. 2022.\"], \"questions\": \"- Could the authors evaluate the one-shot denoising approach as used by [1] and validate if the claim that l2 distance between purified images and clean images decrease as compared to that between adversarial images and clean images?\\n- As indicated by [1], one reason why diffusion model could provided adversarial robustness is that the denoised images can be well classified by the base classifier. Therefore, would it possible that the actual distance decrease under a perceptual based losses [2]? \\n\\n[1] Certified adversarial robustness for free. Carlini et al, 2023. 
\\n[2] Perceptual losses for real-time style transfer and super-resolution. Johnson et al, 2016.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper provides theoretical and empirical studies on the mechanism of diffusion-based adversarial purification methods. First, based on empirical studies on the behavior of diffusion-based purification models, it is suggested that the purification generally increases the $\\\\ell_p$ distance of the input sample to the clean sample, and that the purification results are affected by randomness in diffusion models. Second, the concept of fuzzy robustness and the corresponding evaluation method are proposed for diffusion-based purification methods. Third, a hyperspherical cap model is introduced to depict the adversarial regions. Finally, it is argued that the robustness brought by diffusion-based purification is due to the compressing of image space.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The observation of the increasing $\\\\ell_p$ distance to the clean sample produced by diffusion-based purification is interesting.\", \"The definition of fuzzy adversarial robustness is meaningful and the proposed CFR curve can be a practical evaluation metric for stochastic defense methods.\", \"The figures clearly demonstrate the ideas of the paper.\"], \"weaknesses\": [\"The description of the previous theoretical result in (Nie et al., 2022) is unfaithful.\", \"In Section 3.1, it is stated that the previous explanation of the robustness of diffusion-based purification methods lies in the \\\"shrinkage of the adversarial attack\\\", or more specifically, the decreasing $\\\\ell_p$ distance to the clean sample after purification.\", \"Nonetheless, Theorem 3.1 in (Nie et al., 2022) only suggests that the divergence between the clean data distribution and adversarial sample 
distribution can be decreased by the *forward diffusion process*. It is not assumed that the purified sample obtained by the *diffusion-denoising purification process* is closer to the clean sample under $\\\\ell_p$ distance.\", \"From my perspective, while the purified sample is expected to lie on the non-adversarial manifold, its $\\\\ell_p$ distance to the clean sample is not important as long as the semantic information is preserved.\", \"The hyperspherical cap model is not well supported.\", \"Crossing the critical threshold along an adversarial direction can be a necessary condition for adversarial samples, but it is insufficient. Specifically, Assumption 2 is valid locally given the continuity of the model, but there can be an upper bound on radius $\\\\hat{\\\\gamma}(x_0, \\\\eta)$ determined by $x_0$ and the direction $\\\\eta$ for this assumption to hold. In other words, the sample may cross the boundary again as the radius exceeds $\\\\hat{\\\\gamma}(x_0, \\\\eta)$. An example of such a decision boundary is depicted in Figure 1 of [1].\", \"Therefore, unless a non-trivial uniform upper bound $\\\\hat{\\\\gamma}$ is derived, it cannot be claimed that crossing the critical threshold is a sufficient condition for adversarial examples within an $\\\\ell_2$ neighborhood with a certain radius.\", \"It is also assumed that \\\"the classification boundaries are locally linear\\\" (Lines 375-376), which is not supported by valid theoretical or empirical evidence. While the decision boundaries depicted in the 2D projection in Figure 4(b,c) appear to be linear, they are likely non-linear in the high-dimensional input space. 
Stronger evidence or proper references are required to claim this point.\", \"The causal relationship between the compression of image space and the improved robustness is not well explained.\", \"Section 6 has validated that diffusion-based purification can compress the image space, but it's still not apparent how it contributes to the robustness. For example, while the magnitude of the critical threshold can be reduced due to the compression, the purified sample $f(x_0)$ may also lie closer to the decision boundary, which cannot explain the improved robustness.\", \"It is better to clarify whether the compression effect is the sole contributor to the robustness of diffusion-based purification models.\", \"The little-o notation in Line 717 is inappropriate since two real numbers are compared here instead of two growing functions. The \\\"considerably larger\\\" change of logit should be better defined.\", \"[1] Kim, Hoki, Woojin Lee, and Jaewook Lee. \\\"Understanding catastrophic overfitting in single-step adversarial training.\\\" AAAI 2021.\"], \"questions\": [\"If image space compressing is sufficient for improving the robustness, are conventional image compressing methods like JPEG effective for adversarial defense?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"### **Replies to the Weaknesses**\", \"Yes, we acknowledge that the theoretical explanation in our work is not entirely comprehensive. However, we believe it represents a step forward toward understanding the role of diffusion models in adversarial purification. Specifically:\", \"We developed the **hyperspherical cap model** of adversarial regions (Sec. 5).\", \"We identified the **adversarial compression effect** (Sec. 6).\"], \"these_insights_allowed_us_to_pinpoint_two_key_factors_influencing_robustness_differences_across_individual_images\": \"1. 
**Adversarial Compression Rate**\\u2014dictated by the purification system. \\n 2. **Critical Threshold**\\u2014determined by the classification system. \\n Together with the decisive effect of randomness (Fig. 2), these aspects were not sufficiently recognized by previous studies.\\n\\n* Following the suggestions from [7], we implemented PGD-EOT attacks as an approximation of AutoAttack and found them generally effective. Additionally, we want to emphasize the distinctions between **fuzzy robustness** and **DensePure** [6]: \\n - **DensePure** uses the reverse process of diffusion models as an off-the-shelf denoiser within the denoised smoothing framework [8]. It relies on numerous evaluations with Gaussian-noisy inputs and belongs to the category of *certified robustness*. \\n - In contrast, **fuzzy adversarial robustness** evaluates the probability (*fuzziness*) of adversarial examples in stochastic systems. It requires conducting adversarial attacks first and thus falls under *empirical adversarial robustness*. \\n While there are some conceptual connections, applying the notion of fuzzy adversarial robustness to DensePure may not be entirely appropriate.\\n\\n### **Replies to the Questions**\\n\\n* **PGD-EOT attacks on ImageNet** \\n Unfortunately, calculating full gradients on ImageNet with diffusion models would not be feasible given our computational resources. As noted in Appendix B, it took around 10 days to compute full PGD-EOT gradients on CIFAR-10 with 100 denoising steps. ImageNet images are 64 times larger and require 1.5\\u00d7 as many denoising steps, which would not be practical with our current setup. \\n This limitation has been highlighted in prior works and underscores the challenge of properly estimating robustness at this scale [7]. As a result, we only included BPDA-EOT results for ImageNet in our submission.\\n\\n* **Behavior of $\\\\ell_p$ distances in latent space** \\n This is an excellent question. 
We assume you were referring to latent structures similar to those studied in diffusion models [9]. We speculate that the behavior would likely depend on the relative scale of the latent space: \\n - If the latent space scale is similar to the image space, there may be an **increase** in $\\\\ell_p$ distances. \\n - If the latent space has a larger scale comparable to the inherent variation induced by the diffusion process (as in Fig. 1d), there may be a **decrease** in $\\\\ell_p$ distances. \\n This opens an interesting avenue for future investigation. \\n\\nPlease feel free to ask any further questions and we are always happy to discuss.\\n***\\n### **References**\\n\\n[1] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.\\n\\n[2] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2020a\\n\\n[3] Huang, Y., Yu, Y., Zhang, H., Ma, Y., & Yao, Y. (2022, April). Adversarial robustness of stabilized neural ode might be from obfuscated gradients. In Mathematical and Scientific Machine Learning (pp. 497-515). PMLR.\\n\\n[4] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. \\\"Certified adversarial robustness via randomized smoothing.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[5] Salman, Hadi, et al. \\\"Provably robust deep learning via adversarially trained smoothed classifiers.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[6] Xiao, C., Chen, Z., Jin, K., Wang, J., Nie, W., Liu, M., ... & Song, D. (2023). Densepure: Understanding diffusion models for adversarial robustness. In The Eleventh International Conference on Learning Representations.\\n\\n[7] Robust Evaluation of Diffusion-Based Adversarial Purification, ICCV 2023.\\n\\n[8] Salman, H., Sun, M., Yang, G., Kapoor, A., & Kolter, J. 
Z. (2020). Denoised smoothing: A provable defense for pretrained classifiers. Advances in Neural Information Processing Systems, 33, 21945-21957.\\n\\n[9] Chen, X., Liu, Z., Xie, S., & He, K. (2024). Deconstructing denoising diffusion models for self-supervised learning. arXiv preprint arXiv:2401.14404.\"}", "{\"comment\": \"We sincerely appreciate your constructive reviews and found them highly insightful. We especially thank you for recognizing the main contributions of our paper, despite the problems in comprehensively interpreting prior works.\\n\\n### **Update on the Misclaim** \\nBefore addressing specific questions, we would like to provide an update regarding a misclaim about the previous explanations of robustness improvement under diffusion models, as this was also a major concern raised by reviewer **aGTV**. Our intent was to convey that prior works have offered intuitive explanations, which *motivated* us to investigate whether diffusion models increase or decrease the $\\\\ell_p$ norms. \\n\\nBy \\\"intuitive explanations\\\", we are referring to statements such as those in the DiffPure framework illustration (Figure 1), where the authors describe diffusion models as \\u201crecovering clean images through the reverse denoising process.\\u201d We fully acknowledge that prior works did not claim that \\u201cdiffusion models improve robustness by decreasing $\\\\ell_p$ norms to clean images,\\u201d as they did not perform such experiments. \\n\\nTo clarify this point, we plan to rephrase **Sec 3.1** as follows: \\n\\n> While the exact mechanisms for robustness improvement under diffusion models remain unclear, intuitive explanations have been discussed in the DiffPure paper [Nie et al., 2022], e.g., diffusion models ``recover clean images through the reverse denoising process.'' This motivates us to test a simple hypothesis: diffusion models shrink the $\\\\ell_p$ distances towards clean images during adversarial purification. 
\\n\\nWe thank both reviewers for pointing out this ambiguity. Please feel free to suggest any further refinements to this section. We believe this rephrasing will adequately resolve the concern and ensure clarity without affecting our experimental results or main arguments.\"}", "{\"comment\": \"We sincerely thank the reviewer for their invaluable feedback. Your concern regarding distances beyond $\\\\ell_p$ norms in the context of adversarial robustness is valid and potentially opens an interesting avenue for further exploration. While we acknowledge that $\\\\ell_p$ distances may not encompass all possible measures of robustness, we believe that they address the majority of cases relevant to adversarial attacks. Additionally, our findings showing an increase in $\\\\ell_p$ distances alongside a decrease in perceptual-based metrics (e.g., SSIM) provide a compelling contrast that highlights the unique behavior of current diffusion models. Nonetheless, we recognize this topic is open to debate and welcome further discussion.\\n\\nWe also would like to emphasize that $\\\\ell_p$ distances form only a small portion of our overall results. They serve as an intriguing starting point for introducing the concept of **reference points** (purified clean images) in Section 6. Beyond this, we feel that the decisive effect of randomness discussed in Section 3.2, the exploration of fuzzy robustness in Section 4, and the adversarial cap model and compression effects detailed in Sections 5 and 6 are equally critical to our contribution.\\n\\nWe encourage further discussion among the reviewers on these points and would be happy to provide additional clarifications or address any further questions. Thank you once again for your thoughtful feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s continued engagement and insightful questions, which have greatly enriched the discussion. 
Below, we provide further clarifications to address the concerns raised:\\n\\n- **Critical Thresholds at $f(x_0)$** \\nWe seek to clarify the reference to \\u201cthe critical threshold for $ f(x_0) $ is large enough.\\u201d Indeed, the diffusion model does not substantially alter the critical threshold. There are four critical thresholds to consider along adversarial directions:\\n\\n1. The worst (PGD) of the classifier at $ x_0 $. \\n 2. The worst of the classifier at $ f(x_0) $. \\n 3. The attack threshold for the entire system at $ x_0 $ (before purification). \\n 4. The attack threshold for the entire system at $ f(x_0) $ (after purification). \\n\\nAll four thresholds are positively correlated, meaning samples with smaller critical thresholds typically lie closer to decision boundaries. Moreover, thresholds (3) and (4) are similar in scale, suggesting that the diffusion model tends to move samples along the direction of the decision boundary rather than orthogonal to it. \\nWe will provide a detailed quantification of this effect in the final version, which we hope will further address your question. \\n\\n- **Stochastic and Adversarial Compression** \\nYes, your intuition aligns with our findings. Introducing stochasticity complicates both the conceptual discussion and computational analysis. Thus, as stated at the beginning of Section 6, we focus on studying the compression effect within a specific randomness configuration: \\n\\n> \\u201cNext, we seek to understand how the diffusion models improve robustness in adversarial purification **within a particular randomness configuration**.\\u201d \\n\\nUnder this assumption, the diffusion model effectively operates as a deterministic mapping, as illustrated in Figure 2c. This approach simplifies the analysis while preserving the core findings related to compression.\\n\\nPlease let us know if further clarifications or data would be helpful. 
We welcome continued feedback and are grateful for this constructive dialogue.\"}", "{\"comment\": \"We thank the reviewer for providing invaluable suggestions on improvements. Below, we address the weaknesses and concerns in detail.\\n\\n### **Replies to Weaknesses**\\n1. **Follow-up Experiments on $\\\\ell_p$ Distances Measurements**\\n* Perceptual-based metrics\\n\\nCalculating the perceptual-related distances is a great idea, as also suggested by the reviewer **cqB1** (perceptual loss). To study this question, we evaluated the **structural similarity index measure (SSIM)** [1], a classical perceptual metric in image processing and vision neuroscience, in addition to $\\\\ell_p$ distances. The results are summarized below: \\n\\n| Distance to Clean Images | Adversarial (PGD-EOT, $\\\\ell_\\\\infty=8/255$) | Random (Uniform, $\\\\ell_\\\\infty=8/255$) | Purified States |\\n|----------------------------------|-------------------------------------------|---------------------------------------|----------------------|\\n| **SSIM** | 0.963 \\u00b1 0.030 | 0.966 \\u00b1 0.032 | 0.791 \\u00b1 0.085 | \\n\\nThe results reveal an approximate 20% decrease in perceptual distances (SSIM) after diffusion purification, indicating that purified images become perceptually closer to clean images despite the increase in $\\\\ell_p$ distances. This observation complements our findings and will be included in the final paper. \\n\\nHowever, we emphasize the importance of $\\\\ell_p$ distance measurements in the context of adversarial purification. This is because adversarial attacks are constructed inherently based on $\\\\ell_p$ distances. Diffusion models do not merely convert adversarial perturbations into smaller $\\\\ell_p$ distances, which would simplify the problem; instead, they purify states, making them **perceptually closer** to the original images while potentially increasing $\\\\ell_p$ distances. 
This highlights a unique aspect of their operation that we believe warrants further exploration in the future.\\n\\n* Reverse-only diffusion\\n\\nThanks for pointing it out. We have performed these analyses, and added the results of reverse-only diffusion in Appendix C2. A brief summary is provided below:\\n\\n\\n| Sampling Method | $\\\\ell_2$ to Clean Images | $\\\\ell_\\\\infty$ to Clean Images | SSIM |\\n|------------------|----------------|---------------------|--------------------------|\\n| **DDPM (Reverse-only)** | 1.188 $\\\\to$ 3.084 ($\\\\uparrow$) | 0.031 $\\\\to$ 0.273 ($\\\\uparrow$) | 0.963 $\\\\to$ 0.834 ($\\\\downarrow$) |\\n\\nAs shown, reverse-only diffusion also results in increased $\\\\ell_p$ distances and decreased SSIM after purification. This consistency supports the generalizability of our conclusions. \\n\\n* Denoiser-based defense\\n\\nWhile we have not conducted experiments to compare $\\\\ell_p$ distances in denoiser-based defenses, we clarify that our goal is not to claim that increased $\\\\ell_p$ distances are either (i) unique to diffusion models, or (ii) critical for robustness improvements. Instead, our findings rule out the conventional hypothesis that diffusion models act as $\\\\ell_p$ denoisers to convert adversarial perturbations into smaller norms. We hypothesize that differences in robustness may be attributable to adversarial compression rates (Sec 6), though the inherent randomness of diffusion models poses additional challenges in evaluation (Sec 3.2 and Sec 4).\\n\\n2. **Discussion on Fuzzy Robustness**\\n\\nThis point was also raised by reviewer iBpK. To address this question, we add a discussion comparing our fuzzy adversarial robustness with randomized smoothing [2] (including DensePure [3]). 
Despite similarities, there is a key difference between these concepts: **randomized smoothing** does not necessarily conduct adversarial attacks but relies on numerous evaluations with Gaussian-noisy inputs, categorizing it under *certified robustness*. In contrast, **fuzzy adversarial robustness** evaluates the probability (fuzziness) of adversarial examples for a stochastic system, requiring adversarial attacks first and falling under *empirical adversarial robustness*. \\n\\nWe believe linking the grades of the strongest empirical attack (PGD-EOT) to the certified bound of randomized smoothing would significantly strengthen the concept of fuzzy adversarial robustness. We will add a discussion on this point and add the relevant citations in the final version.\"}", "{\"comment\": \"#### **The Hyperspherical Cap Model**\\n- We appreciate the reviewer\\u2019s concern. For the classifier we studied (WideResNet-28-10, standard classifier from RobustBench [4]) on CIFAR-10, we did not observe multiple crossings of the decision boundary along the adversarial direction within the typical scale of adversarial perturbations. Specifically, we observed such effects (e.g., flipping to other classes) only when the $\\\\ell_2$ norm exceeded 5\\u2014much larger than the radius of adversarial examples typically considered (e.g., $\\\\ell_\\\\infty=8/255$, roughly corresponding to $\\\\ell_2=1$). Most class vs. $\\\\ell_2$ distance curves along adversarial directions were step functions, indicating no transitions or at most a single transition within the adversarial ball. \\n\\n We consider this to be an interesting result. Indeed, one might have expected a different outcome before these experiments were done. We will formally quantify these findings in the final version to further support the validity of Assumption 2.\\n\\n- The above observations should address the reviewer\\u2019s concern. If OpenReview permits, we are happy to share the class vs. $\\\\ell_2$ curves directly. 
Alternatively, we welcome suggestions on specific metrics the reviewer considers important. These results are reproducible, and we will include details of our experimental settings in the paper (WideResNet-28-10, \\\"Standard\\\" $\\\\ell_\\\\infty$ classifier from RobustBench on CIFAR-10, scanning along PGD attacks up to $\\\\ell_2=1$).\\n\\n- We acknowledge the difficulty of sampling points around adversarial directions in high-dimensional spaces due to their sparsity. Inspired by [5], we projected adversarial directions onto 2D planes defined by random directions to study the loss landscape. To address this concern, we refined our methods by sampling multiple 2D slices. Instead of one 2D slice per image, we now use 100 slices, with the same 1000 points per slice. This new method allows us to sample high-dimensional space around the adversarial direction, addressing a key limitation of the previous method we used that was pointed out by the reviewer.\\n\\n This refinement slightly changed the estimated slope of the psychometric function, with $\\\\bar{k} = 5.9106 \\\\pm 1.0737$. Importantly, the new result still supports our claim that the transition is sharp. We will include this new analysis in the revised manuscript. We believe the refined analysis makes this point stronger. We thank you for this suggestion. \\n\\n\\n#### **Adversarial Compression**\\n\\n- We clarify that the critical threshold (Assumption 2) reflects the distance to decision boundaries along adversarial directions and is an inherent property of the classifier. The purification system (e.g., diffusion models) does not alter this threshold. Instead, purification compresses the distances of adversarial examples toward the anchor point $f(x_0)$, preventing them from crossing the threshold. Two factors influence robust/non-robust outcomes: \\n 1. The amount of compression induced by the diffusion model for each image. \\n 2. The critical threshold of the purified clean image $f(x_0)$. 
\\n As illustrated in Fig. 6, increased compression moves samples further from decision boundaries, leading to improved robustness. The reviewer\\u2019s intuition is correct that distances to the decision boundary at $f(x_0)$ significantly affect robustness outcomes.\\n\\n- We emphasize that our paper identifies two critical effects: \\n 1. **Adversarial compression rate** (a property of the purification system). \\n 2. **Critical threshold** (a property of the classification system). \\n As shown in Fig. 6, both effects strongly correlate with robustness outcomes, and robustness is not solely determined by compression.\\n\\n#### **Notations**\\n- We thank the reviewer for pointing out the notation issue. To clarify, we propose replacing the small-o notation with the \\\"much less than\\\" symbol ($\\\\ll$). By \\\"considerably large,\\\" we refer to changes in logits along random projections that are negligible compared to changes along adversarial directions. We hope this adjustment makes our argument clearer.
However, we are more interested in studying robustness as an emergent property of learning systems (e.g., denoising or diffusion models) rather than from engineered constraints like low-pass filters or JPEG compression.\\n\\nWe hope our responses address the reviewer\\u2019s concerns satisfactorily. Please feel free to provide further feedback or raise additional questions. \\n\\n---\\n### **References**\\n[1] Nie, W., Guo, B., Huang, Y., Xiao, C., Vahdat, A., & Anandkumar, A. (2022, June). Diffusion Models for Adversarial Purification. In International Conference on Machine Learning (pp. 16805-16827). PMLR.\\n\\n[2] Xiao, C., Chen, Z., Jin, K., Wang, J., Nie, W., Liu, M., ... & Song, D. (2023). Densepure: Understanding diffusion models for adversarial robustness. In The Eleventh International Conference on Learning Representations.\\n\\n[3] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.\\n\\n[4] Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., ... & Hein, M. (2020). Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670.\\n\\n[5] Li, H., Xu, Z., Taylor, G., Studer, C., & Goldstein, T. (2018). Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31.\\n\\n[6] Yin, D., Gontijo Lopes, R., Shlens, J., Cubuk, E. D., & Gilmer, J. (2019). A fourier perspective on model robustness in computer vision. Advances in Neural Information Processing Systems, 32.\"}", "{\"metareview\": \"This paper examines how diffusion models improve adversarial robustness, challenging the idea that these models bring adversarial samples closer to clean images. 
Instead, the authors argue that purified images actually move farther away in $\\\\ell_p$ space, and that the robustness comes from image space compression and the randomness inherent to diffusion models. They introduce \\\"fuzzy adversarial robustness\\\" to account for the stochastic nature of these models and propose a hyperspherical cap model to explain adversarial regions. These are novel contributions backed by empirical evidence, with an intriguing take on how diffusion-based defenses function.\\n\\nNevertheless, there are some noticeable weaknesses. Reviewer aGTV is particularly concerned about the theoretical claims\\u2014like the hyperspherical cap model\\u2014which lack strong mathematical or empirical backing. Also, the fuzzy robustness concept overlaps with prior work on randomized smoothing. Another issue is the limited exploration of practical applications.\\n\\nBased on these points, I concur with most reviewers and recommend rejecting the paper for now. I encourage the authors to consider these comments and revise the paper accordingly for a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers had a lot of back-and-forth about this paper. On the plus side, they agreed that the observations\\u2014like purified images moving farther from clean ones\\u2014are novel and worth exploring. Reviewer aGTV raised valid concerns about the hyperspherical cap model and whether it\\u2019s supported by evidence. Reviewer 7QBf appreciated the analysis but pointed out gaps, like not comparing denoiser-based defenses. Reviewer cqB1 questioned the limited scope of sampling methods and metrics used, while iBpK noted ambiguities in the figures and flagged overlaps with prior concepts.\\n\\nThe authors responded thoroughly, clarifying many points and adding new experiments, like using perceptual metrics. 
While some reviewers were satisfied with these efforts and raised their scores, others felt the core issues\\u2014like the lack of theoretical rigor and the practical utility of the findings\\u2014remained unresolved. Ultimately, while the rebuttal helped clarify certain aspects, the paper\\u2019s contributions still feel incomplete. It\\u2019s a solid step forward but not quite ready for publication.\"}", "{\"summary\": [\"This paper presents several observations and explanations for DiffPure, mainly summarized as follows:\", \"Diffusion models increase\\u2014rather than decrease\\u2014the \\u2113p distances to clean samples.\", \"The concept of fuzzy adversarial robustness is introduced.\", \"A hyperspherical cap model of adversarial regions is proposed.\", \"It is shown that diffusion models increase adversarial robustness by compressing the image space.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"By measuring the distance between denoised examples and adversarial images, this paper shows that DiffPure actually increases the distance between denoised images and adversarial examples, rather than decreasing it, which contradicts the previous claim that \\\"DiffPure increases robustness by decreasing the perturbation budget.\\\" Although I believe **Carlini et al., Xiao et al., and Nie et al. have never made this claim**, I still consider this experiment insightful, as many people agree with this idea.\", \"I strongly appreciate the authors' experiments demonstrating that \\\"randomness in diffusion determines the purification direction but not the randomness in the input.\\\" I believe this is insightful for diffusion researchers.\"], \"weaknesses\": \"This paper contains several ambiguities, overclaims, and misunderstandings of previous work (see both Weakness and Questions sections), but these can likely be quickly addressed during the rebuttal phase. 
I would consider raising my score if the authors address these issues.\\n\\n- I disagree with the authors' claim that \\\"a model simply encoding every clean image as the prior mode may not generalize well\\\" and \\\"the results above indicate that diffusion models are ineffective in removing small perturbations.\\\" A large distance between diffusion-purified images and real images does not support this claim. Imagine an image of a panda; the direction of hair growth in pandas can vary, so even if the hair direction changes, the image remains realistic. Such subtle changes often occur after diffusion denoising, causing an increase in \\u2113p distance, but not due to poor generalization.\\n\\n- In Fig. 2 (a) and (b), the term \\\"correlation\\\" (e.g., 0.2368, 0.9954 in the paper) is not clearly defined. I'm still unsure of the x and y axes or what is meant by \\\"correlation\\\" here (covariance? cosine similarity? normalized correlation?).\\n\\n- \\\"Some treated randomness as a bug and proposed methods to cancel its effect in evaluation (cite Carlini et al.).\\\" Carlini never stated this. What Carlini means is that randomness may obscure gradients, making evaluations challenging, but Carlini does not claim that \\\"randomness is a bug\\\" or that randomness inherently negates robustness.\\n\\n- The authors' definition of \\\"fuzzy adversarial examples\\\" is already extensively discussed in the context of certified robustness via randomized smoothing. You should cite at least [1, 2], as their definition \\\\( g(x)_c = \\\\text{Pr}(f(x) = c) \\\\) closely aligns with yours. While I recognize the differences between your definition and theirs, citing these works is essential for academic integrity.\\n\\n---\\n\\n**References** \\n[1] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. \\\"Certified adversarial robustness via randomized smoothing.\\\" International Conference on Machine Learning. PMLR, 2019. \\n[2] Salman, Hadi, et al. 
\\\"Provably robust deep learning via adversarially trained smoothed classifiers.\\\" Advances in Neural Information Processing Systems 32 (2019).\", \"questions\": [\"I'm unclear about Fig. 1(c). The x-axis is labeled as the diffusion noise level, but what is the y-axis? Is it the distance between noisy examples and the original example, or between denoised examples and the original example? Additionally, why does the image become clearer rather than blurrier as \\\\( t \\\\) increases from 50 to 200? Are these images noisy examples or denoised ones? I can't fully understand this figure, and as a result, I'm unable to accurately interpret lines 162-201.\", \"\\\"We removed the forward process by only performing denoising and still observed that the distances to clean samples increased.\\\" Could you please include this figure? In my opinion, diffusion will produce collapsed images if the input noise level does not align with the diffusion network's noise level condition. I\\u2019m really curious to see the result of this experiment.\", \"\\\"Under these assumptions, one can mathematically show that the adversarial regions should form a hyperspherical cap when considering the \\\\( \\\\ell_2 \\\\) neighborhood.\\\" In addition to including the proof in the appendix, I strongly recommend that the authors provide some intuition in the main paper to help readers understand why this holds. Could the authors give further intuition on this result during the rebuttal?\", \"In Fig. 5(b), does the y-axis represent \\\"variance explained\\\" as the eigenvalue? What is meant by \\\"explained\\\" here?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"As mentioned in the title, this paper aims to investigate how and how well diffusion models improve adversarial robustness. 
Specifically, this paper first demonstrates that diffusion models push the purified images away from clean images, which challenges the previous belief (i.e., diffusion models will push adversarial images closer to clean images). Then, this paper shows that the randomness largely influences the robustness of diffusion models. Based on this, this paper introduces a new concept called 'fuzzy adversarial examples', which considers the probability of adversarial attack fooling the system and proposes a new robust evaluation metric called 'cumulative fuzzy robustness (CFR)' to evaluate the robustness of fuzzy adversarial examples. Lastly, this paper uses a hyperspherical cap model to show that diffusion models improve robustness by shrinking the image space.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. This paper is very well-written and easy to follow.\\n\\n2. This paper provides sufficient insights (as mentioned in the summary) on diffusion-based adversarial purification, which can inspire researchers in this area to develop more advanced defense methods. More importantly, this paper closes an important research gap on how diffusion-based adversarial purification actually works. I would also like to hear other reviewers' opinions on the contributions of this paper.\\n\\n3. The concept of fuzzy adversarial examples is novel and intuitive. Based on this, the proposed cumulative fuzzy robustness (CFR) is a more suitable evaluation metric to examine the robustness of a system with inherent robustness (especially for diffusion-based adversarial purifications).\", \"weaknesses\": \"1. While the observation that the diffusion model further pushes adversarial examples away from clean examples is intriguing, this paper lacks a theoretical explanation of why this occurs. However, this is only a minor weakness, as including a theoretical explanation is not always possible.\\n\\n2. 
Fuzzy robustness evaluation is only performed on DiffPure, which is a bit outdated. Authors are encouraged to evaluate more recent diffusion-based adversarial purification methods to make the results more convincing (e.g., [1] [2]).\\n\\n[1] DensePure: Understanding Diffusion Models Towards Adversarial Robustness, ICLR 2023.\\n[2] Robust Evaluation of Diffusion-Based Adversarial Purification, ICCV 2023.\", \"questions\": \"1. What is the PGD+EOT result on ImageNet?\\n\\n2. Just curious\\u2014do you think the conclusions in this paper would hold up if diffusion models are constructed in latent space instead of pixel space? And why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your positive feedback and are glad to see that our work might be insightful for the community. As you mentioned being interested in other reviewers\\u2019 opinions, we would like to first summarize our updates based on common questions raised by the reviewers.\\n\\n### **Summary of Updates**\\n\\n* **Clarification on the Misclaim** \\nAs noted by reviewers **aGTV** and **iBpK**, we clarified the misclaim regarding $\\\\ell_p$ norms in **Sec. 3.1**. Specifically, we rephrased that paragraph: \\n > While the exact mechanisms for robustness improvement under diffusion models remain unclear, intuitive explanations have been discussed in the DiffPure paper [Nie et al., 2022], e.g., diffusion models ``recover clean images through the reverse denoising process.'' This motivates us to test a simple hypothesis: diffusion models shrink the $\\\\ell_p$ distances towards clean images during adversarial purification. 
\\n\\nThis addresses concerns about ambiguity and better aligns with prior works without impacting experimental results or key arguments.\\n\\n* **New Perceptual Distance Measurements** \\nResponding to feedback from reviewers **7QBf** and **cqB1**, we included **structural similarity index measure (SSIM)** [1] results alongside $\\\\ell_p$ distances. These measurements provide complementary insights into the purification process and will be incorporated into the final version. \\n\\n | Distance (to clean images) | Adversarial (PGD-EOT, $\\\\ell_\\\\infty=8/255$) | Random (uniform, $\\\\ell_\\\\infty=8/255$) | Purified states |\\n |-------------------------------------|------------------------------------------------------------|-----------------------------------------------------------|----------------------|\\n | **SSIM** | 0.963 $\\\\pm$ 0.030 | 0.966 $\\\\pm$ 0.032 | 0.791 $\\\\pm$ 0.085 | \\n\\n* **Additional Results of Reverse-Only Diffusion and DDIM Sampling** \\nResponding to feedback from reviewers **7QBf** and **iBpK**, we included additional experimental results with reverse-only diffusion models in the appendix. Results with other sampling techniques, such as **DDIM** [2], were also appended in response to reviewer **cqB1**, as other SDE-based samplings may raise concerns in gradient masking from the numerical solver [3].\\n\\n* **Discussion on Fuzzy Adversarial Robustness** \\nWe elaborated on the distinction between **randomized smoothing** [4] and **fuzzy adversarial robustness**, emphasizing differences in their motivation, procedure, and source of randomness. 
The added discussion strengthens the conceptual framing and includes citations to related works, such as **SmoothAdv** [5] and **DensePure** [6].\\n\\nThe remaining questions are addressed separately in our detailed responses.\"}", "{\"comment\": \"### **Replies to Questions**\\n* **One-Step Denoising** \\nWe conducted experiments using the one-step denoising approach [6] and obtained the following results: \\n\\n| Sampling Method | Clean Accuracy | PGD-EOT Robustness | $\\\\ell_2$ to Clean Images | $\\\\ell_\\\\infty$ to Clean Images |\\n|------------------|----------------|---------------------|--------------------------|-----------------------------|\\n| **DDPG (One-Step)** | 85.2% | 48.3% | 2.692 \\u00b1 0.444 | 0.239 \\u00b1 0.044 | \\n\\nOne-step denoising significantly reduced computation time, making adversarial attack evaluations more tractable. While clean accuracy remained comparable, adversarial robustness (PGD-EOT) decreased slightly, consistent with our findings. These results reinforce the broader applicability of our conclusions. \\n\\n* **Perceptual-Based Metrics**\\nCalculating the perceptual-related distances is a great idea, as also suggested by the reviewer **7QBf** (FID score). To study this question, we evaluated the **structural similarity index measure (SSIM)** [7], a widely used perceptual metric in computer vision, in addition to $\\\\ell_p$ distances. 
The results are summarized below: \\n\\n| Distance to Clean Images | Adversarial (PGD-EOT, $\\\\ell_\\\\infty=8/255$) | Random (Uniform, $\\\\ell_\\\\infty=8/255$) | Purified States |\\n|----------------------------------|-------------------------------------------|---------------------------------------|----------------------|\\n| **SSIM** | 0.963 \\u00b1 0.030 | 0.966 \\u00b1 0.032 | 0.791 \\u00b1 0.085 | \\n\\nThe results reveal an approximate 20% decrease in perceptual distances (SSIM) after diffusion purification, indicating that purified images become perceptually closer to clean images despite the increase in $\\\\ell_p$ distances. This observation complements our findings and will be included in the final paper. \\n\\nHowever, we would like to emphasize the importance of $\\\\ell_p$ distance measurements in the context of adversarial purification. This is because adversarial attacks are inherently constructed based on $\\\\ell_p$ distances. Diffusion models do not merely convert adversarial perturbations into smaller $\\\\ell_p$ distances, which would simplify the problem; instead, they purify states, making them **perceptually closer** to the original images while potentially increasing $\\\\ell_p$ distances. This highlights a unique aspect of their operation that we believe warrants further exploration.\\n\\n---\\n### **References**\\n[1] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. (2020). Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456.\\n\\n[2] Huang, Y., Yu, Y., Zhang, H., Ma, Y., & Yao, Y. (2022, April). Adversarial robustness of stabilized neural ode might be from obfuscated gradients. In Mathematical and Scientific Machine Learning (pp. 497-515). PMLR.\\n\\n[3] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. 
In International Conference on Learning Representations, 2020a\\n\\n[4] Zahra Kadkhodaie, Florentin Guth, Eero P Simoncelli, and St\\u00e9phane Mallat. Generalization in diffusion models arises from geometry-adaptive harmonic representations. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[5] Mohan, S., Kadkhodaie, Z., Simoncelli, E. P., & Fernandez-Granda, C. (2019). Robust and interpretable blind image denoising via bias-free convolutional neural networks. arXiv preprint arXiv:1906.05478.\\n\\n[6] Carlini, N., Tramer, F., Dvijotham, K. D., Rice, L., Sun, M., & Kolter, J. Z. (2022). (certified!!) Adversarial robustness for free!. arXiv preprint arXiv:2206.10550.\\n\\n[7] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.\"}", "{\"comment\": \"3. **Future Directions for Diffusion-Based Defenses**\\n- The observed **increase in $\\\\ell_p$ distances** (Fig. 1) and the decisive effect of randomness (Fig. 2) suggest that conventional diffusion models, targeted for image generation, operate on variance scales much larger than typical adversarial perturbations. A potential avenue for future work is to train diffusion models tailored to the low-noise regime, transitioning into the **$\\\\ell_p$ shrinkage** regime (Fig. 1d) to establish true attractor dynamics.\\n \\n- A recent study [4] explored the transition from memorization to generalization in diffusion models, using the **bias-free denoising** framework [5]. It was shown that the bias terms in U-Net architectures hindered the denoising model from generalizing to other unseen noise levels. As we illustrated the scales of adversarial perturbation were considerably larger than the inherent randomness of diffusion models, the bias terms in the diffusion model might limit robustness improvements. 
Investigating these biases could lead to more effective defenses.\\n\\n- The identified **adversarial compression effect** offers a practical metric for evaluating purification systems without relying on computationally intensive empirical adversarial attacks. This insight could guide the development of more efficient adversarial purification strategies.\\n\\nPlease feel free to ask further questions and we are happy to receive your feedback.\\n\\n---\\n### **References**\\n[1] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.\\n\\n[2] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. \\\"Certified adversarial robustness via randomized smoothing.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[3] Xiao C, Chen Z, Jin K, et al. Densepure: Understanding diffusion models towards adversarial robustness[J]. arXiv preprint arXiv:2211.00322, 2022.\\n\\n[4] Zahra Kadkhodaie, Florentin Guth, Eero P Simoncelli, and St\\u00e9phane Mallat. Generalization in diffusion models arises from geometry-adaptive harmonic representations. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[5] Mohan, S., Kadkhodaie, Z., Simoncelli, E. P., & Fernandez-Granda, C. (2019). Robust and interpretable blind image denoising via bias-free convolutional neural networks. arXiv preprint arXiv:1906.05478.\"}" ] }
EV7FMBZxnx
Reveal Object in Lensless Photography via Region Gaze and Amplification
[ "Yin Xiangjun", "Huihui Yue" ]
Detecting concealed objects, such as in vivo lesions or camouflage, requires customized imaging systems. Lensless cameras, being compact and flexible, offer a promising alternative to bulky lens systems. However, the absence of lenses leads to measurements lacking visual semantics, posing significant challenges for concealed object detection (COD). To tackle this issue, we propose a region gaze-amplification network (RGANet) for progressively exploiting concealed objects from lensless imaging measurements. Specifically, a region gaze module (RGM) is proposed to mine spatial-frequency cues informed by biological and psychological mechanisms, and a region amplifier (RA) is designed to amplify the details of object regions to enhance COD performance. Furthermore, we contribute the first relevant dataset as a benchmark to prosper the lensless imaging community. Extensive experiments demonstrate the exciting performance of our method. Our codes will be released in \url{https://github.com/YXJ-NTU/Lensless-COD}.
[ "Lensless Imaging; Computational Imaging; Region Gaze; Region Amplifier; Concealed Object Detection" ]
Accept (Poster)
https://openreview.net/pdf?id=EV7FMBZxnx
https://openreview.net/forum?id=EV7FMBZxnx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wMCdJHFJ3r", "tlfLUJuH3x", "qrzP2aRi4Z", "o1zJQPepJh", "ic29xArrct", "eitEkaCOiC", "ZVTTURvW1W", "XYcH0COKoJ", "TFwehJeSbK", "RdVFRogvjB", "JdiHGz5LYh", "HflSNoDgPR", "FZw4vc2sD0", "ByU5wFGI2z", "AzB9g7sSWn", "8jA57mGwPG" ], "note_type": [ "meta_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1734790955004, 1732523681695, 1737523759336, 1730804913578, 1732640837922, 1732523636330, 1732523709198, 1732547013741, 1732524801021, 1733199971078, 1732524830661, 1732640917412, 1730189996821, 1733203240298, 1732521256638, 1730606660687 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6284/Area_Chair_my2m" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6284/Reviewer_pe4y" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Reviewer_EKVY" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Reviewer_ZAok" ], [ "ICLR.cc/2025/Conference/Submission6284/Reviewer_EKVY" ], [ "ICLR.cc/2025/Conference/Submission6284/Authors" ], [ "ICLR.cc/2025/Conference/Submission6284/Reviewer_EKVY" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces the Region Gaze-Amplification Network (RGANet), a novel method for detecting concealed objects using a lensless camera. 
The method employs a progressive approach to enhance concealed object detection through advanced feature extraction and amplification techniques. Additionally, the authors propose a new real-capture dataset tailored for concealed object detection (COD) with lensless imaging systems, providing a valuable resource for training and evaluation.\\n\\nThe reviewers praised the paper for its clear presentation and reasonable experimental results. The introduction of a real-capture dataset specifically designed for COD is also expected to benefit the broader research community.\\n\\nBased on the reviewers' unanimous recommendations and the authors' successful rebuttal, I recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors' responses during the rebuttal address part of the reviewers' concerns, such as the experimental setup and the generalisability of the method. Finally, reviewers unanimously recommended acceptance.\"}", "{\"comment\": \"**Q2\\uff1aInsufficient experiments.**\", \"we_value_your_suggestions_for_improving_experimental_comparisons_and_have_addressed_your_concerns_below\": \"- **SOTA COD Comparisons:** Figs. 4-5 and Table 1 in the original manuscript already compare multiple SOTA COD methods. As requested, we have added experiments using methods [3] and [4] in the revised manuscript (in the **Appendix A.3, Fig. 10, and Tab. 5**). Unfortunately, [2] lacks publicly available code, preventing direct comparisons. However, we have included a detailed discussion of its methodology to contextualize our findings. Here, **in the following Tabs. 1 and 2**, we conduct a brief experiment to demonstrate the performance. The results indicate that both methods remain inferior to our proposed method.\\n- **Two-Stage Methods:** We have conducted comparative experiments with two-stage methods, as detailed in **Appendix A.2 (including Fig. 9 and Tab. 4). 
**The results show that the complexity of two-stage methods (methods with a \\u201cdetection-after-reconstruction\\u201d strategy) increases severalfold in terms of FLOPs and parameters, which is evidently detrimental to practical engineering applications. Our one-step method achieves performance metrics within 10% of the best results in some aspects, while significantly reducing computational complexity, making it more advantageous for practical applications. Furthermore, our one-step method enhances privacy protection by eliminating visual information from the process, thereby extending its applicability to real-world scenarios.\\n\\n**Table 1** Comparison results on the methods listed in the comments (FPNet, FSPNet) on the Test-Easy dataset\\n| Method | $F_{\\\\beta}^{\\\\omega}$ | $M$ | $E_{\\\\xi}$ | $S_{\\\\alpha}$ |\\n|--------|--------|--------|--------|--------|\\n| FPNet | 0.741 | 0.113 | 0.837 | 0.797 |\\n| FEDER | 0.757 | 0.105 | 0.845 | 0.816 |\\n| Ours | 0.815 | 0.079 | 0.896 | 0.834 |\\n| |\\n\\n**Table 2** Comparison results on the methods listed in the comments (FPNet, FSPNet) on the Test-Hard dataset\\n| Method | $F_{\\\\beta}^{\\\\omega}$ | $M$ | $E_{\\\\xi}$ | $S_{\\\\alpha}$ |\\n|--------|--------|--------|--------|--------|\\n| FPNet | 0.539 | 0.159 | 0.746 | 0.712 |\\n| FEDER | 0.584 | 0.131 | 0.757 | 0.724 |\\n| Ours | 0.705 | 0.098 | 0.845 | 0.770 |\\n| |\", \"reference\": \"[1] Salman S. Khan, Varun Sundar, Vivek Boominathan, Ashok Veeraraghavan, and Kaushik Mitra. Flatnet: Towards photorealistic scene reconstruction from lensless measurements. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4):1934-1948, 2022.\\n\\n**Q3: Details of the proposed dataset.**\\n\\nThe real dataset utilized in this study and the dataset described in [5] are both derived from the same source: a subset of ImageNet comprising 10k data pairs. In this work, the dataset was curated specifically for COD task requirements, resulting in a refined subset of 2.6k data pairs. Conversely, the dataset in [5] was curated for general object segmentation with broader selection criteria, leading to a larger subset of 5.9k data pairs. While both datasets are sampled from the same original source, some overlap is inevitable. Based on our analysis, the overlap consists of 326 data pairs. We have provided a detailed analysis of these differences in **Sec. 4.1** of the revised manuscript to ensure clarity and to highlight the unique focus of our dataset for COD.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The authors introduce a new method for detecting concealed objects using a lensless camera, the Region Gaze-Amplification Network (RGANet), which progressively enhances concealed object detection through well-crafted feature extraction and amplification techniques. A novel real-capture dataset is proposed for training and evaluating the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clearly written, and the experimental results are compelling. The proposed new real-capture dataset DLCOD will help further research in this field. Additionally, the authors have discussed the limitations of their proposed method in the appendix.\", \"weaknesses\": \"There are some aspects that could benefit from further clarification and enhancement:\\n\\n1. Additional details about the setup of the real-capture experiments would enhance the reproducibility and understanding of the method. 
Specifically, could the authors provide information on the distance between the PHlatCam and the display, as well as the display's specifications (e.g., size, model, and whether it is an LCD or OLED)?\\n\\n2. Although the model was trained on a real dataset, the data was captured from a display screen. Given that lensless cameras may capture a broader range of wavelengths than standard RGB cameras, will using a screen-based dataset introduce potential bias? The model may be less effective in real-world conditions where wavelengths are not limited to the three produced by RGB displays. It would be beneficial for the authors to conduct additional experiments using non-display-based scenes to validate the model's performance in more natural, unfiltered conditions (qualitative evaluation is not required). If this is not feasible, further discussion of this limitation could be included.\", \"questions\": \"Please refer to the *Weakness* section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your positive feedback. In your response, you mentioned: \\\" *Thanks for your detailed response. While the authors partially addressed my concerns, the contribution of this setting is still limited since existing works have already introduced object detection into lensless imaging. 
Hence, I tend to keep my rating.* \\\"\\n\\nWe would like to emphasize that **the core contributions of our work do not lie merely in the network modules but rather in a comprehensive and thoughtful integration of contributions across tasks, datasets, engineering applications, and methodological considerations.** We sincerely hope the reviewer can recognize the effort and depth of contribution we have invested in this work.\\n\\nTo further clarify this point, we provide a more detailed explanation of the key contributions underlying our study.\\n\\n**1.Task Contribution**\\n\\nWe would like to emphasize that our work is the first to address camouflaged object detection (COD) in the context of lensless imaging, focusing on the unique challenges associated with high-level reasoning tasks in this domain. This research provides a novel technical pathway for advancing the practical application of lensless imaging technologies.\\n\\nIt is important to acknowledge that some existing studies have indeed explored object reasoning tasks in lensless imaging [1][2][3]. However, these efforts are either limited to basic object classification [1] or conventional object segmentation [2][3], which face significantly fewer challenges compared to our lensless COD framework. Our method represents a paradigm shift, offering a comprehensive solution to enhance lensless object reasoning under complex conditions. **This advancement holds considerable value for addressing intricate object reasoning problems in real-world environments and provides key insights for extending lensless imaging technologies to practical applications such as surveillance, reconnaissance, vivo diagnostics, and IoT systems. **\\n\\nMoreover, as demonstrated by the comparative results, our method significantly outperforms existing methods, including state-of-the-art COD methods, in terms of task performance (e.g., **Tab. 1, Fig. 4, Fig. 5, Tab. 5, Fig. 10, and Fig. 11** ). 
**We sincerely hope the reviewer will recognize our efforts in enhancing the practicality of computational imaging technologies and advancing task-level contributions in computer vision, and provide a more comprehensive evaluation of our contributions in this new task domain.**\\n\\n**2. Dataset Contribution**\\n\\nWe present the first dataset for lensless COD, named the DLCOD dataset. This dataset establishes a crucial benchmark for evaluating lensless imaging systems in the context of COD tasks, providing valuable insights for expanding both the lensless imaging and COD research communities. **We hope the reviewer recognizes our substantial efforts in broadening the scope of lensless imaging (and, by extension, emerging computational imaging techniques) within its application domain. We respectfully request the reviewer to reconsider the dataset's contribution and its significance to the field**.\\n\\n**3. Engineering Application Contribution**\\n\\nOur study represents a pioneering attempt to engineer practical applications of lensless imaging technology, offering valuable perspectives on **its deployment in areas such as surveillance, reconnaissance, vivo diagnostics, IoT systems, and confined-space detection.** Specifically, lensless imaging systems bring unique advantages, including reduced size, cost, and power consumption, alongside unparalleled flexibility in design tailored to specific use cases\\u2014features unattainable by conventional imaging devices. Furthermore, lensless systems inherently support privacy protection, addressing a critical limitation of traditional imaging systems. By integrating lensless imaging devices with task-specific reasoning algorithms, we expand the practical boundaries of inference applications, addressing the demands of complex engineering scenarios. 
**We sincerely hope the reviewer acknowledges our efforts to advance the engineering applicability of lensless imaging technologies and appreciates the latent value of our work in addressing real-world challenges.**\"}", "{\"comment\": \"We greatly value your profound insights and acknowledgment, and have provided meticulous, point-by-point responses to address each concern with precision and clarity.\\n\\n**Q1\\uff1aAbout Challenge of COD with Lensless Cameras and Contributions**\\n\\n- **Challenge of COD with Lensless Cameras:** Currently, the challenges we face in our work primarily stem from the inherent difficulties of lensless imaging and the complexity of the COD task itself. The primary challenges are: 1) Lensless imaging lacks traditional visual features, making it challenging to extract task-relevant information from the data; 2) The complexity of the data impose greater demands on model training and optimization, particularly in noise suppression and key information retention; And 3) the inherent challenges of the COD task itself. We have revised and clarified this section in the updated version of the \\u201cIntroduction\\u201d.\\n- **Novel Contributions:** Our contributions should be highlighted as follows: This work is the first to address COD specifically tailored for lensless imaging, thereby extending the practical application of lensless imaging technology to inference tasks.\\n For optical-aware feature extraction (OFE), while [1] employs Wiener filter principles for reconstruction tasks, our method fundamentally diverges in its application. Unlike [1], which optimizes for visually improved reconstructed images, our OFE is embedded in a framework jointly trained for COD. This design ensures the extracted features are specialized for COD, aligning with task-specific requirements rather than general-purpose visual fidelity. Moreover, our method leverages these tailored features to enhance COD performance through end-to-end optimization. 
This method represents a critical advancement in overcoming the unique challenges of lensless imaging, including complex scenes and unconventional data characteristics, which traditional methods struggle to address effectively. We have clarified this distinction in the revised manuscript to better highlight the unique contributions of our method.\\n Regarding the spatial-frequency enhancement module (i.e., Region Gaze Module), we introduced an adaptive thresholding mechanism to distinguish high-frequency information from low-frequency components, unlike the fixed thresholds used in previous studies. This design advantage allows for better differentiation between noise and signal, reducing the impact of noise. Furthermore, we designed a region amplifier module that amplifies areas of interest for secondary recognition, which significantly enhances the reconstruction quality. The ablation study results confirm the notable contributions of the region amplifier to the overall system performance.\"}", "{\"comment\": \"**Questions:**\\n\\n**Q1:Does the performance improvement come from the large parameters and computational cost?.**\\n\\nWhile larger models typically outperform smaller ones, this is not always guaranteed. As shown in our comparative results (Table 1 in manuscript), EyeCoD, with the highest FLOPs, and OCENet, with the largest number of parameters, both exhibit significantly worse performance than our proposed method.\\n\\n**Q2:Error in Figure 2, where the sigmoid function is directly fed to addition without any input in PVT.**\\n\\nThank you for pointing out the error in Fig. 2. We have corrected it in the revised version of the figure and updated the manuscript accordingly. The figure now accurately represents the intended architecture, where the sigmoid function is correctly provided with the necessary input.\"}", "{\"comment\": \"Thanks for your detailed response. 
While the authors partially addressed my concerns, the contribution of this setting is still limited since existing works have already introduced object detection into lensless imaging. Hence, I tend to keep my rating.\"}", "{\"comment\": \"We sincerely thank you for your valuable comments and recognition, and have provided individual responses to address each concern clearly and thoroughly.\\n\\n**Weaknesses:**\\n\\n**Q1:The use of RGM twice in the network is confusing.**\\n\\nThank you for your feedback. The lensless-based COD task is inherently challenging due to the difficulties of vision-independent imaging in lensless systems and the complexities of COD itself, which conventional methods often struggle to address. Our method, however, improves lensless-based COD accuracy by utilizing two RGMs in a structured and purposeful manner. The first RGM performs coarse extraction to identify the general region of the object, providing an initial understanding of the scene. The Region Amplifier (RA) then highlights and amplifies the output from the Optical-aware Feature Extractor (OFE). The second RGM further refines this enhanced output, and the final fusion, assisted by Hierarchical Feature Decoding (HFD), combines the results of the two RGMs to extract complementary information, optimizing COD performance. Experimental results demonstrate that our method significantly outperforms others, validating its effectiveness. To further support this, as your suggestion, we have included ablation results in the **Tab. 2** of revised manuscript. Here, in following Tabs. 1 and 2, we briefly show the results that remove the first RGM (i.e. configuration to input RGM after I_OFE passes through RA). 
These results show a clear decline in performance, further supporting the rationale behind incorporating both RGMs.\\n\\n**Table 1** Ablation study on RGM on Test-Easy Dataset\\n\\n| ID | Configuration | $F_{\\\\beta}^{\\\\omega}$ | $M$ |\\n|------|--------------------------|------------------------|-------|\\n| #1 | Full model (w/o 1st RGM) | 0.631 | 0.157 |\\n| #2 | Full model | 0.815 | 0.079 |\\n\\n**Table 2** Ablation study on RGM on Test-Hard Dataset\\n\\n| ID | Configuration | $F_{\\\\beta}^{\\\\omega}$ | $M$ |\\n|------|--------------------------|------------------------|-------|\\n| #1 | Full model (w/o 1st RGM) | 0.539 | 0.162 |\\n| #2 | Full model | 0.705 | 0.098 |\\n\\n**Q2:Why not study more common object detection tasks?**\\n\\nWe greatly appreciate your concerns regarding the task we have chosen. This work builds upon our prior research on lensless-based common object detection, serving as an advanced extension tailored to more challenging scenarios. While it remains applicable to simpler tasks like common object detection, its primary focus is to address complex, confined environments. Examples include in-body detection with lensless endoscopy, where space is limited and visibility is challenging, or robotics navigating concealed and cluttered terrains that demand precise detection for effective navigation and task execution. By tackling these challenges, we aim to further the practical applications of lensless imaging technology and unlock its potential in specialized domains.\\n\\n**Q3:The design of the entire network framework and internal modules is relatively ordinary.**\\n\\nAs previously mentioned, lensless-based COD tasks are inherently challenging, and a single network architecture cannot effectively address these tasks. To overcome this, we propose the Region Gaze, Amplification, and Gaze-Again (RGA) mechanism, which progressively refines detection results through a multi-stage method. 
The OFE is tailored specifically to extract features critical for COD tasks, while the RGM introduces a learnable threshold to adaptively separate high- and low-frequency information, overcoming the limitations of fixed-frequency methods common in existing methods. Additionally, our RA module uniquely amplifies object regions to support fine-grained detection. Together, these components establish a comprehensive paradigm for lensless-based COD tasks, transcending individual module designs to deliver a system optimized for the unique demands of lensless imaging. This framework aims to advance the practical applications of lensless imaging technology. Therefore, our framework is carefully structured and thoughtfully designed based on the RGA mechanism, rather than being a mere stack or assembly of modules.\\n\\n**Q4:The format and layout are uncomfortable.**\\n\\nThank you for your insightful comments regarding the format and layout of our manuscript. In light of your suggestions, we have implemented a unified revision of the figures, tables, and equations in the updated version to improve their overall clarity and aesthetic presentation.\"}", "{\"comment\": \"Given our thorough response, we sincerely hope that you find it satisfactory and that it effectively addresses your concerns. We would greatly appreciate your feedback on whether our clarifications have resolved the issues raised.\"}", "{\"comment\": \"**Q5: I think OFE+encoder-decoder can achieve good results.**\\n\\nWe sincerely appreciate your constructive feedback. In our comparative experiments, all the methods used for comparison were integrated with the OFE to ensure a fair evaluation, as detailed in \\u201cSec 4.3 Compared Baselines\\u201d. These setups of comparison methods can all be described as OFE combined with a specific encoder-decoder architecture. However, the results in **Figs. 
4-5 and Tab. 1** clearly indicate that these comparison methods do not surpass the performance of our proposed method.\"}", "{\"comment\": \"**4. Methodological Contributions:**\\n\\nWhile the reviewer expressed concerns about the novelty of our method, we would like to clarify that our method is not an aggregation of existing works but a carefully coupled design inspired by the mechanisms of \\\"confirmation (fixation) and localized focus (magnification)\\\" employed by human vision to observe uncertain or camouflaged objects.\\n\\nFor the **OFE** module, while it may seem similar to designs in [2][4], we emphasize that our OFE essentially implements Wiener filtering, a principle widely adopted in various works [5][6]. As stated in our introduction, the uniqueness of our OFE lies in its task-driven learning of spatial information (e.g., camouflaged object regions), which differentiates it from the reconstruction-oriented designs in [2][4].\\n\\nRegarding the **Spatial-Frequency Enhancement**, our method is rooted in the types of information required for human object recognition, integrating both spatial and frequency domains. Unlike the fixed frequency decomposition in existing methods, our adaptive frequency selection mechanism introduces a novel adaptive thresholding strategy to decouple high- and low-frequency components. This adaptability is crucial for noise-invariant information separation and improved inference performance, an aspect overlooked in prior research. As evidenced by our comparative experiments (**Fig. 10, Fig. 11, and Tab. 5**), our method demonstrates clear advantages. **We hope the reviewer will recognize the distinctions in our mechanism design compared to existing works.**\\n\\nAdditionally, we propose a **Region Amplifier** module, which does not merely magnify the scene but leverages the coarse localization mask provided by the first RGM to adaptively enhance the object region. 
This selective amplification enriches the fine details of the region of interest, significantly improving recognition quality in the subsequent stages. **Importantly, the design is intentionally lightweight, involving only a single convolution layer and simple mathematical operations, ensuring minimal computational complexity.**\\n\\nIn summary, our work is not an aggregation of network elements but represents holistic innovation across tasks, datasets, engineering applications, and methodological paradigms. We sincerely encourage the reviewer to evaluate our work comprehensively rather than limiting the focus to network design aspects. We firmly believe our contributions offer significant value to the relevant research domains, and we respectfully reaffirm our request for a thorough assessment of our work.\", \"reference\": \"[1] Xiuxi Pan, Xiao Chen, Tomoya Nakamura, and Masahiro Yamaguchi. Incoherent reconstruction free object recognition with mask-based lensless optics and the transformer. Optics Express, 29(23):37962-37978, 2021.\\n\\n[2] Xiangjun Yin, Huanjing Yue, Mengxi Zhang, Huihui Yue, Xingyu Cui, and Jingyu Yang. Inferring objects from lensless imaging measurements. IEEE Transactions on Computational Imaging, 8:1265-1276, 2022.\\n\\n[3] Haoran You, Yang Zhao, Cheng Wan, Zhongzhi Yu, et al. EyeCoD: Eye tracking system acceleration via flatcam-based algorithm & hardware co-design. IEEE Micro, 43(4):88-97, 2023.\\n\\n[4] S. Khan, V. Sundar, V. Boominathan, A. Veeraraghavan, and K. Mitra, et al. FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2020.\\n\\n[5] Jiangxin Dong, Stefan Roth, and Bernt Schiele. Deep Wiener deconvolution: Wiener meets deep learning for image deblurring. In Proceedings of the 34th International Conference on Neural Information Processing Systems. 89, 1048-1059, 2020.\\n\\n[6] Jiangxin Dong, et al. 
DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 9960-9976, 2021.\"}", "{\"summary\": \"The authors propose a region gaze amplification network (RGANet) for progressively exploiting concealed objects from lensless imaging measurements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The PHlatCam dataset is semantically labeled, contributing a benchmark dataset for this task, supported by extensive experiments.\\n2. Investigates the detection of concealed objects in lensless imaging scenarios.\", \"weaknesses\": \"1. The use of RGM twice in the network is confusing. The ablation experiment only studies the effectiveness of the combination of internal modules under the setting of RGM twice. It sounds more reasonable to input RGM after I_OFE passes through RA.\\n2. Under the condition of lensless cameras, the author studies concealed object detection. Such a combination of tasks makes people doubt whether its real application scenarios are wide. Why not study more common object detection tasks?\\n3. The design of the entire network framework and internal modules is relatively ordinary. Basically, it is based on existing network modules with certain modifications, giving people a feeling of an A+B combination.\\n4. The format and layout are uncomfortable, for example, the formulas in the paper have larger line spacing. In addition, the figures in the paper are not beautiful enough, and the color matching is abrupt.\\n5. From the ablation experiment result #10 in Table 2, we can see that OFE is the most important core part of the network. 
I think OFE+encoder-decoder can achieve good results.\", \"questions\": \"See Weaknesses for details.\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Given the detailed response from the authors, I have decided to update my score from 5 to 6. Still, the contribution of this work is borderline overall in my opinion.\"}", "{\"comment\": \"Thank you sincerely for these valuable comments and acknowledgment of our work. To ensure clarity and address your concerns comprehensively, we respond to each comment individually:\\n\\n**Weaknesses:**\\n\\n**Q1:Additional details about the setup of the real-capture experiments would enhance the reproducibility and understanding of the method.**\\n\\nWe sincerely appreciate your valuable feedback regarding the setup. Below are the requested details, which we will include in the revised manuscript (**Appendix A.1**) as:\\n\\n**(1) Distance between the PHlatCam and the display:** The PHlatCam was positioned **42 cm** from the display throughout all real-capture experiments. This distance was carefully selected to optimize image capture, considering the camera\\u2019s field of view and resolution. This configuration remained consistent during both training and testing phases, ensuring uniform alignment of camera and monitor pixels.\\n\\n**(2) Display specifications:**\\n\\n- **Model and Type:** The display used was a **Dell S2425HS**, which is an **LCD** screen.\\n- **Size:** The screen size was **24 inches**, with a resolution of **1920\\u00d71080 pixels**.\\n\\n**(3) Additional Notes:** The image was resized via bicubic interpolation to fit the largest central square on the monitor. The white balance for PHlatCam was calibrated using the automatic white balance setting of the PointGrey Flea3 camera, determined when an all-white image was displayed on the monitor. 
The exposure time was governed by the camera's automatic mode, with gain fixed at 0 dB.\\n\\n**Q2:Additional experiments using non-display-based scenes to validate the model's performance.**\\n\\nWe appreciate your concern regarding the potential bias introduced by using a screen-based dataset, especially since lensless cameras can capture a broader range of wavelengths compared to standard RGB displays.\\n\\nTo address this, we conducted additional experiments using a dataset that captures camouflage scenarios in natural environments, free from screen-based biases. This dataset, consisting of 30 pairs of lensless imaging data, reconstructed scenes, and ground truths, covers a broader range of wavelengths, allowing us to evaluate the model's performance in unfiltered, real-world conditions. The results highlight the model\\u2019s effectiveness and robustness in real-world conditions, extending its capabilities beyond screen-based data.\\n\\nThese results are included in **Appendix A.4 and Fig. 11** of the revised manuscript. However, we acknowledge that a larger and more diverse set of non-display-based data would offer even more robust validation of the model's performance. This will be explored in future work to further assess and refine the model across a wider range of real-world scenarios.\"}", "{\"summary\": \"The authors aim to address concealed object detection (COD) with lensless imaging systems. They propose a network for COD leveraging spatial and frequency feature enhancement and fusion. They also annotate 2600 paired images from the Display Captured Dataset to build a new dataset for COD with lensless imaging systems. However, the authors should design a special network for this task by considering the characteristics of the lensless camera, and clarify the details of the proposed dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. A new dataset for COD with lensless imaging systems.\\n2. 
Good performance for a new setting.\", \"weaknesses\": \"1. Straightforward method with limited novelty. The authors do not analyze the challenges of COD with lensless cameras. The main difference in design for the lensless cameras is the optical-aware feature extraction, but it refers to [1]. In addition, the main module of the proposed method is the spatial-frequency enhancement module, which directly uses the idea of existing works for COD [2, 3].\\n\\n2. Insufficient experiments. The authors should compare with SOTA COD methods, such as [2,3,4]. Moreover, they should compare with the two-stage methods (lensless imaging methods combined with COD methods). Given the proposed method's large parameter count and computational cost, the two-stage methods may even be more lightweight.\\n\\n3. Details of the proposed dataset. The authors should provide a detailed analysis of the proposed dataset to clarify the difference from the dataset in [5], since some samples shown in the paper are the same as samples in [5].\\n\\n[1] PhlatCam: Designed Phase-Mask Based Thin Lensless Camera. TPAMI 2020.\\n[2] Frequency perception network for camouflaged object detection. MM 2023.\\n[3] Camouflaged object detection with feature decomposition and edge reconstruction. CVPR 2023.\\n[4] Feature shrinkage pyramid for camouflaged object detection with transformers. CVPR 2023.\\n[5] Inferring Objects From Lensless Imaging Measurements. TCI 2022.\", \"questions\": \"1. Does the performance improvement come from the large parameters and computational cost?\\n2. Error in Figure 2, where the sigmoid function is directly fed to addition without any input in PVT.\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"